00:00:00.001 Started by upstream project "autotest-per-patch" build number 132770 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.175 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:05.562 The recommended git tool is: git 00:00:05.562 using credential 00000000-0000-0000-0000-000000000002 00:00:05.565 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:05.577 Fetching changes from the remote Git repository 00:00:05.582 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:05.594 Using shallow fetch with depth 1 00:00:05.594 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:05.595 > git --version # timeout=10 00:00:05.606 > git --version # 'git version 2.39.2' 00:00:05.606 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:05.617 Setting http proxy: proxy-dmz.intel.com:911 00:00:05.617 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:10.966 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:10.977 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:10.987 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:10.987 > git config core.sparsecheckout # timeout=10 00:00:10.998 > git read-tree -mu HEAD # timeout=10 00:00:11.025 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:11.054 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:11.054 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:11.181 [Pipeline] Start of Pipeline 00:00:11.194 [Pipeline] library 00:00:11.195 Loading library shm_lib@master 00:00:11.195 Library shm_lib@master is cached. Copying from home. 00:00:11.212 [Pipeline] node 00:00:11.220 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:11.222 [Pipeline] { 00:00:11.233 [Pipeline] catchError 00:00:11.234 [Pipeline] { 00:00:11.245 [Pipeline] wrap 00:00:11.253 [Pipeline] { 00:00:11.260 [Pipeline] stage 00:00:11.261 [Pipeline] { (Prologue) 00:00:11.277 [Pipeline] echo 00:00:11.278 Node: VM-host-SM0 00:00:11.284 [Pipeline] cleanWs 00:00:11.295 [WS-CLEANUP] Deleting project workspace... 00:00:11.295 [WS-CLEANUP] Deferred wipeout is used... 
00:00:11.328 [WS-CLEANUP] done 00:00:11.498 [Pipeline] setCustomBuildProperty 00:00:11.587 [Pipeline] httpRequest 00:00:12.156 [Pipeline] echo 00:00:12.158 Sorcerer 10.211.164.101 is alive 00:00:12.168 [Pipeline] retry 00:00:12.170 [Pipeline] { 00:00:12.210 [Pipeline] httpRequest 00:00:12.215 HttpMethod: GET 00:00:12.216 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.217 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.240 Response Code: HTTP/1.1 200 OK 00:00:12.241 Success: Status code 200 is in the accepted range: 200,404 00:00:12.241 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:17.883 [Pipeline] } 00:00:17.901 [Pipeline] // retry 00:00:17.910 [Pipeline] sh 00:00:18.202 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:18.215 [Pipeline] httpRequest 00:00:19.968 [Pipeline] echo 00:00:19.969 Sorcerer 10.211.164.101 is alive 00:00:19.976 [Pipeline] retry 00:00:19.978 [Pipeline] { 00:00:19.988 [Pipeline] httpRequest 00:00:19.992 HttpMethod: GET 00:00:19.992 URL: http://10.211.164.101/packages/spdk_5f032e8b783dade1aea3cd4e9e1ba9cab334d99b.tar.gz 00:00:19.993 Sending request to url: http://10.211.164.101/packages/spdk_5f032e8b783dade1aea3cd4e9e1ba9cab334d99b.tar.gz 00:00:19.999 Response Code: HTTP/1.1 200 OK 00:00:20.000 Success: Status code 200 is in the accepted range: 200,404 00:00:20.000 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_5f032e8b783dade1aea3cd4e9e1ba9cab334d99b.tar.gz 00:02:38.118 [Pipeline] } 00:02:38.136 [Pipeline] // retry 00:02:38.145 [Pipeline] sh 00:02:38.422 + tar --no-same-owner -xf spdk_5f032e8b783dade1aea3cd4e9e1ba9cab334d99b.tar.gz 00:02:41.715 [Pipeline] sh 00:02:41.995 + git -C spdk log --oneline -n5 00:02:41.995 5f032e8b7 lib/reduce: Write Zero to partial chunk when unmapping the chunks. 
00:02:41.995 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:02:41.995 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:02:41.995 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:02:41.995 e2dfdf06c accel/mlx5: Register post_poller handler 00:02:42.011 [Pipeline] writeFile 00:02:42.028 [Pipeline] sh 00:02:42.313 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:42.323 [Pipeline] sh 00:02:42.601 + cat autorun-spdk.conf 00:02:42.601 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:42.601 SPDK_TEST_NVMF=1 00:02:42.601 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:42.601 SPDK_TEST_URING=1 00:02:42.601 SPDK_TEST_USDT=1 00:02:42.601 SPDK_RUN_UBSAN=1 00:02:42.601 NET_TYPE=virt 00:02:42.601 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:42.607 RUN_NIGHTLY=0 00:02:42.609 [Pipeline] } 00:02:42.625 [Pipeline] // stage 00:02:42.642 [Pipeline] stage 00:02:42.644 [Pipeline] { (Run VM) 00:02:42.657 [Pipeline] sh 00:02:42.948 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:42.949 + echo 'Start stage prepare_nvme.sh' 00:02:42.949 Start stage prepare_nvme.sh 00:02:42.949 + [[ -n 2 ]] 00:02:42.949 + disk_prefix=ex2 00:02:42.949 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:02:42.949 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:02:42.949 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:02:42.949 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:42.949 ++ SPDK_TEST_NVMF=1 00:02:42.949 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:42.949 ++ SPDK_TEST_URING=1 00:02:42.949 ++ SPDK_TEST_USDT=1 00:02:42.949 ++ SPDK_RUN_UBSAN=1 00:02:42.949 ++ NET_TYPE=virt 00:02:42.949 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:42.949 ++ RUN_NIGHTLY=0 00:02:42.949 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:42.949 + nvme_files=() 00:02:42.949 + declare -A nvme_files 00:02:42.949 + backend_dir=/var/lib/libvirt/images/backends 00:02:42.949 + nvme_files['nvme.img']=5G 00:02:42.949 + nvme_files['nvme-cmb.img']=5G 00:02:42.949 + nvme_files['nvme-multi0.img']=4G 00:02:42.949 + nvme_files['nvme-multi1.img']=4G 00:02:42.949 + nvme_files['nvme-multi2.img']=4G 00:02:42.949 + nvme_files['nvme-openstack.img']=8G 00:02:42.949 + nvme_files['nvme-zns.img']=5G 00:02:42.949 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:42.949 + (( SPDK_TEST_FTL == 1 )) 00:02:42.949 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:42.949 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:42.949 + for nvme in "${!nvme_files[@]}" 00:02:42.949 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:02:42.949 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:42.949 + for nvme in "${!nvme_files[@]}" 00:02:42.949 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:02:42.949 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:42.949 + for nvme in "${!nvme_files[@]}" 00:02:42.949 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:02:42.949 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:42.949 + for nvme in "${!nvme_files[@]}" 00:02:42.949 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:02:42.949 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:42.949 + for nvme in "${!nvme_files[@]}" 00:02:42.949 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:02:43.209 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:43.209 + for nvme in "${!nvme_files[@]}" 00:02:43.209 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:02:43.209 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:43.209 + for nvme in "${!nvme_files[@]}" 00:02:43.209 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:02:43.209 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:43.209 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:02:43.209 + echo 'End stage prepare_nvme.sh' 00:02:43.209 End stage prepare_nvme.sh 00:02:43.236 [Pipeline] sh 00:02:43.536 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:43.536 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:02:43.536 00:02:43.536 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:02:43.536 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:02:43.536 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:43.536 HELP=0 00:02:43.536 DRY_RUN=0 00:02:43.536 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:02:43.536 NVME_DISKS_TYPE=nvme,nvme, 00:02:43.536 NVME_AUTO_CREATE=0 00:02:43.536 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:02:43.536 NVME_CMB=,, 00:02:43.536 NVME_PMR=,, 00:02:43.536 NVME_ZNS=,, 00:02:43.536 NVME_MS=,, 00:02:43.536 NVME_FDP=,, 
00:02:43.536 SPDK_VAGRANT_DISTRO=fedora39 00:02:43.536 SPDK_VAGRANT_VMCPU=10 00:02:43.536 SPDK_VAGRANT_VMRAM=12288 00:02:43.536 SPDK_VAGRANT_PROVIDER=libvirt 00:02:43.536 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:43.536 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:43.536 SPDK_OPENSTACK_NETWORK=0 00:02:43.536 VAGRANT_PACKAGE_BOX=0 00:02:43.536 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:43.536 FORCE_DISTRO=true 00:02:43.536 VAGRANT_BOX_VERSION= 00:02:43.536 EXTRA_VAGRANTFILES= 00:02:43.536 NIC_MODEL=e1000 00:02:43.536 00:02:43.536 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:02:43.536 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:46.861 Bringing machine 'default' up with 'libvirt' provider... 00:02:47.426 ==> default: Creating image (snapshot of base box volume). 00:02:47.684 ==> default: Creating domain with the following settings... 00:02:47.684 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733716349_6d46226b4208e44191ba 00:02:47.684 ==> default: -- Domain type: kvm 00:02:47.684 ==> default: -- Cpus: 10 00:02:47.684 ==> default: -- Feature: acpi 00:02:47.684 ==> default: -- Feature: apic 00:02:47.684 ==> default: -- Feature: pae 00:02:47.684 ==> default: -- Memory: 12288M 00:02:47.684 ==> default: -- Memory Backing: hugepages: 00:02:47.684 ==> default: -- Management MAC: 00:02:47.684 ==> default: -- Loader: 00:02:47.684 ==> default: -- Nvram: 00:02:47.684 ==> default: -- Base box: spdk/fedora39 00:02:47.684 ==> default: -- Storage pool: default 00:02:47.684 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733716349_6d46226b4208e44191ba.img (20G) 00:02:47.684 ==> default: -- Volume Cache: default 00:02:47.684 ==> default: -- Kernel: 00:02:47.684 ==> default: -- Initrd: 00:02:47.684 ==> default: -- Graphics Type: vnc 00:02:47.684 ==> default: -- Graphics Port: -1 00:02:47.684 ==> default: -- Graphics IP: 127.0.0.1 00:02:47.684 ==> default: -- Graphics Password: Not defined 00:02:47.684 ==> default: -- Video Type: cirrus 00:02:47.684 ==> default: -- Video VRAM: 9216 00:02:47.684 ==> default: -- Sound Type: 00:02:47.684 ==> default: -- Keymap: en-us 00:02:47.684 ==> default: -- TPM Path: 00:02:47.684 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:47.684 ==> default: -- Command line args: 00:02:47.684 ==> default: -> value=-device, 00:02:47.684 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:47.684 ==> default: -> value=-drive, 00:02:47.684 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:02:47.684 ==> default: -> value=-device, 00:02:47.684 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:47.684 ==> default: -> value=-device, 00:02:47.684 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:47.684 ==> default: -> value=-drive, 00:02:47.684 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:47.684 ==> default: -> value=-device, 00:02:47.684 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:47.684 ==> default: -> value=-drive, 00:02:47.684 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:47.684 ==> default: -> value=-device, 00:02:47.684 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:47.684 ==> default: -> value=-drive, 00:02:47.684 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:47.684 ==> default: -> value=-device, 00:02:47.684 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:47.941 ==> default: Creating shared folders metadata... 00:02:47.941 ==> default: Starting domain. 00:02:49.841 ==> default: Waiting for domain to get an IP address... 00:03:11.876 ==> default: Waiting for SSH to become available... 00:03:11.876 ==> default: Configuring and enabling network interfaces... 00:03:15.159 default: SSH address: 192.168.121.69:22 00:03:15.159 default: SSH username: vagrant 00:03:15.159 default: SSH auth method: private key 00:03:17.690 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:25.820 ==> default: Mounting SSHFS shared folder... 00:03:27.195 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:27.195 ==> default: Checking Mount.. 00:03:28.570 ==> default: Folder Successfully Mounted! 00:03:28.570 ==> default: Running provisioner: file... 00:03:29.137 default: ~/.gitconfig => .gitconfig 00:03:29.737 00:03:29.737 SUCCESS! 00:03:29.737 00:03:29.737 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:03:29.737 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:29.737 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:03:29.737 00:03:29.745 [Pipeline] } 00:03:29.761 [Pipeline] // stage 00:03:29.770 [Pipeline] dir 00:03:29.771 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:03:29.773 [Pipeline] { 00:03:29.786 [Pipeline] catchError 00:03:29.787 [Pipeline] { 00:03:29.799 [Pipeline] sh 00:03:30.076 + vagrant ssh-config --host vagrant 00:03:30.076 + sed -ne /^Host/,$p 00:03:30.076 + tee ssh_conf 00:03:33.353 Host vagrant 00:03:33.353 HostName 192.168.121.69 00:03:33.354 User vagrant 00:03:33.354 Port 22 00:03:33.354 UserKnownHostsFile /dev/null 00:03:33.354 StrictHostKeyChecking no 00:03:33.354 PasswordAuthentication no 00:03:33.354 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:33.354 IdentitiesOnly yes 00:03:33.354 LogLevel FATAL 00:03:33.354 ForwardAgent yes 00:03:33.354 ForwardX11 yes 00:03:33.354 00:03:33.364 [Pipeline] withEnv 00:03:33.366 [Pipeline] { 00:03:33.389 [Pipeline] sh 00:03:33.662 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:33.662 source /etc/os-release 00:03:33.662 [[ -e /image.version ]] && img=$(< /image.version) 00:03:33.662 # Minimal, systemd-like check. 
00:03:33.662 if [[ -e /.dockerenv ]]; then 00:03:33.662 # Clear garbage from the node's name: 00:03:33.662 # agt-er_autotest_547-896 -> autotest_547-896 00:03:33.662 # $HOSTNAME is the actual container id 00:03:33.662 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:33.662 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:33.662 # We can assume this is a mount from a host where container is running, 00:03:33.662 # so fetch its hostname to easily identify the target swarm worker. 00:03:33.662 container="$(< /etc/hostname) ($agent)" 00:03:33.662 else 00:03:33.662 # Fallback 00:03:33.662 container=$agent 00:03:33.662 fi 00:03:33.662 fi 00:03:33.662 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:33.662 00:03:33.928 [Pipeline] } 00:03:33.941 [Pipeline] // withEnv 00:03:33.947 [Pipeline] setCustomBuildProperty 00:03:33.957 [Pipeline] stage 00:03:33.959 [Pipeline] { (Tests) 00:03:33.972 [Pipeline] sh 00:03:34.245 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:34.514 [Pipeline] sh 00:03:34.789 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:35.058 [Pipeline] timeout 00:03:35.058 Timeout set to expire in 1 hr 0 min 00:03:35.060 [Pipeline] { 00:03:35.074 [Pipeline] sh 00:03:35.350 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:35.914 HEAD is now at 5f032e8b7 lib/reduce: Write Zero to partial chunk when unmapping the chunks. 00:03:35.922 [Pipeline] sh 00:03:36.196 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:36.468 [Pipeline] sh 00:03:36.745 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:37.017 [Pipeline] sh 00:03:37.294 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:03:37.551 ++ readlink -f spdk_repo 00:03:37.551 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:37.551 + [[ -n /home/vagrant/spdk_repo ]] 00:03:37.551 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:37.551 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:37.551 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:37.551 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:37.551 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:37.551 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:03:37.551 + cd /home/vagrant/spdk_repo 00:03:37.551 + source /etc/os-release 00:03:37.551 ++ NAME='Fedora Linux' 00:03:37.551 ++ VERSION='39 (Cloud Edition)' 00:03:37.551 ++ ID=fedora 00:03:37.551 ++ VERSION_ID=39 00:03:37.551 ++ VERSION_CODENAME= 00:03:37.551 ++ PLATFORM_ID=platform:f39 00:03:37.551 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:37.551 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:37.551 ++ LOGO=fedora-logo-icon 00:03:37.551 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:37.551 ++ HOME_URL=https://fedoraproject.org/ 00:03:37.551 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:37.551 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:37.551 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:37.551 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:37.551 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:37.551 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:37.551 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:37.551 ++ SUPPORT_END=2024-11-12 00:03:37.551 ++ VARIANT='Cloud Edition' 00:03:37.551 ++ VARIANT_ID=cloud 00:03:37.551 + uname -a 00:03:37.551 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:37.551 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:38.116 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:38.116 Hugepages 00:03:38.116 node hugesize free / total 00:03:38.116 node0 1048576kB 0 / 0 00:03:38.116 node0 2048kB 0 / 0 00:03:38.116 00:03:38.116 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:38.116 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:38.116 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:38.116 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:38.116 + rm -f /tmp/spdk-ld-path 00:03:38.116 + source autorun-spdk.conf 00:03:38.116 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:38.116 ++ SPDK_TEST_NVMF=1 00:03:38.116 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:38.116 ++ SPDK_TEST_URING=1 00:03:38.116 ++ SPDK_TEST_USDT=1 00:03:38.116 ++ SPDK_RUN_UBSAN=1 00:03:38.116 ++ NET_TYPE=virt 00:03:38.116 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:38.116 ++ RUN_NIGHTLY=0 00:03:38.116 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:38.116 + [[ -n '' ]] 00:03:38.116 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:38.116 + for M in /var/spdk/build-*-manifest.txt 00:03:38.116 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:38.116 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:38.116 + for M in /var/spdk/build-*-manifest.txt 00:03:38.116 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:38.116 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:38.116 + for M in /var/spdk/build-*-manifest.txt 00:03:38.116 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:38.116 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:38.116 ++ uname 00:03:38.116 + [[ Linux == \L\i\n\u\x ]] 00:03:38.116 + sudo dmesg -T 00:03:38.116 + sudo dmesg --clear 00:03:38.116 + dmesg_pid=5252 00:03:38.116 + [[ Fedora Linux == FreeBSD ]] 00:03:38.116 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:38.116 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:38.116 + sudo 
dmesg -Tw 00:03:38.116 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:38.116 + [[ -x /usr/src/fio-static/fio ]] 00:03:38.116 + export FIO_BIN=/usr/src/fio-static/fio 00:03:38.116 + FIO_BIN=/usr/src/fio-static/fio 00:03:38.116 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:38.116 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:38.116 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:38.116 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:38.116 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:38.116 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:38.116 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:38.116 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:38.116 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:38.373 03:53:20 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:38.373 03:53:20 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:38.373 03:53:20 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:38.373 03:53:20 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:03:38.373 03:53:20 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:38.373 03:53:20 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:03:38.373 03:53:20 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:03:38.373 03:53:20 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:03:38.373 03:53:20 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:03:38.373 03:53:20 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:38.373 03:53:20 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:03:38.373 03:53:20 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:38.373 03:53:20 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:38.373 03:53:20 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:38.373 03:53:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:38.373 03:53:20 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:38.373 03:53:20 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:38.373 03:53:20 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:38.373 03:53:20 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:38.373 03:53:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.373 03:53:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.373 03:53:20 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.373 03:53:20 -- paths/export.sh@5 -- $ export PATH 00:03:38.373 03:53:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:38.373 03:53:20 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:38.373 03:53:20 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:38.373 03:53:20 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733716400.XXXXXX 00:03:38.373 03:53:20 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733716400.8seFqU 00:03:38.373 03:53:20 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:38.373 03:53:20 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:38.373 03:53:20 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:38.373 03:53:20 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:38.373 03:53:20 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:38.373 03:53:20 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:38.373 03:53:20 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:38.373 03:53:20 -- common/autotest_common.sh@10 -- $ set +x 00:03:38.373 03:53:20 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:03:38.373 03:53:20 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:38.373 03:53:20 -- pm/common@17 -- $ local monitor 00:03:38.373 03:53:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.373 03:53:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:38.373 03:53:20 -- pm/common@25 -- $ sleep 1 00:03:38.373 03:53:20 -- pm/common@21 -- $ date +%s 00:03:38.373 03:53:20 -- pm/common@21 -- $ date +%s 00:03:38.373 03:53:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733716400 00:03:38.373 03:53:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733716400 00:03:38.373 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733716400_collect-vmstat.pm.log 00:03:38.373 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733716400_collect-cpu-load.pm.log 00:03:39.303 03:53:21 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:39.303 03:53:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:39.303 03:53:21 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:39.303 03:53:21 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:39.303 03:53:21 -- spdk/autobuild.sh@16 -- $ date -u 00:03:39.303 Mon Dec 9 03:53:21 AM UTC 2024 00:03:39.303 03:53:21 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:39.303 v25.01-pre-304-g5f032e8b7 00:03:39.303 03:53:21 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:39.303 03:53:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:39.303 03:53:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:39.303 03:53:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:39.303 03:53:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:39.303 03:53:21 -- common/autotest_common.sh@10 -- $ set +x 00:03:39.303 ************************************ 00:03:39.303 START TEST ubsan 00:03:39.303 ************************************ 00:03:39.303 using ubsan 00:03:39.303 03:53:21 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:39.303 00:03:39.303 real 0m0.000s 00:03:39.303 user 0m0.000s 00:03:39.303 sys 0m0.000s 00:03:39.303 03:53:21 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:39.303 ************************************ 00:03:39.303 END TEST ubsan 00:03:39.303 03:53:21 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:39.303 ************************************ 00:03:39.560 03:53:21 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:39.560 03:53:21 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:39.560 03:53:21 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:39.560 03:53:21 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:39.560 03:53:21 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:39.560 03:53:21 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:39.560 03:53:21 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:39.560 03:53:21 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:39.560 03:53:21 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:03:39.560 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:39.560 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:40.122 Using 'verbs' RDMA provider 00:03:55.979 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:08.211 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:08.211 Creating mk/config.mk...done. 00:04:08.211 Creating mk/cc.flags.mk...done. 00:04:08.211 Type 'make' to build. 
00:04:08.211 03:53:49 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:08.211 03:53:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:08.211 03:53:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:08.211 03:53:49 -- common/autotest_common.sh@10 -- $ set +x 00:04:08.211 ************************************ 00:04:08.211 START TEST make 00:04:08.211 ************************************ 00:04:08.211 03:53:49 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:08.211 make[1]: Nothing to be done for 'all'. 00:04:23.214 The Meson build system 00:04:23.214 Version: 1.5.0 00:04:23.214 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:23.214 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:23.214 Build type: native build 00:04:23.214 Program cat found: YES (/usr/bin/cat) 00:04:23.214 Project name: DPDK 00:04:23.214 Project version: 24.03.0 00:04:23.214 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:23.214 C linker for the host machine: cc ld.bfd 2.40-14 00:04:23.214 Host machine cpu family: x86_64 00:04:23.214 Host machine cpu: x86_64 00:04:23.214 Message: ## Building in Developer Mode ## 00:04:23.214 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:23.214 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:23.214 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:23.214 Program python3 found: YES (/usr/bin/python3) 00:04:23.214 Program cat found: YES (/usr/bin/cat) 00:04:23.214 Compiler for C supports arguments -march=native: YES 00:04:23.214 Checking for size of "void *" : 8 00:04:23.214 Checking for size of "void *" : 8 (cached) 00:04:23.214 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:23.214 Library m found: YES 00:04:23.214 Library numa found: YES 00:04:23.214 Has header "numaif.h" : YES 00:04:23.214 Library fdt found: NO 00:04:23.214 Library execinfo found: NO 00:04:23.214 Has header "execinfo.h" : YES 00:04:23.214 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:23.214 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:23.215 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:23.215 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:23.215 Run-time dependency openssl found: YES 3.1.1 00:04:23.215 Run-time dependency libpcap found: YES 1.10.4 00:04:23.215 Has header "pcap.h" with dependency libpcap: YES 00:04:23.215 Compiler for C supports arguments -Wcast-qual: YES 00:04:23.215 Compiler for C supports arguments -Wdeprecated: YES 00:04:23.215 Compiler for C supports arguments -Wformat: YES 00:04:23.215 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:23.215 Compiler for C supports arguments -Wformat-security: NO 00:04:23.215 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:23.215 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:23.215 Compiler for C supports arguments -Wnested-externs: YES 00:04:23.215 Compiler for C supports arguments -Wold-style-definition: YES 00:04:23.215 Compiler for C supports arguments -Wpointer-arith: YES 00:04:23.215 Compiler for C supports arguments -Wsign-compare: YES 00:04:23.215 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:23.215 Compiler for C supports arguments -Wundef: YES 00:04:23.215 Compiler for C supports arguments -Wwrite-strings: YES 00:04:23.215 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:04:23.215 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:23.215 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:23.215 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:23.215 Program objdump found: YES (/usr/bin/objdump) 00:04:23.215 Compiler for C supports arguments -mavx512f: YES 00:04:23.215 Checking if "AVX512 checking" compiles: YES 00:04:23.215 Fetching value of define "__SSE4_2__" : 1 00:04:23.215 Fetching value of define "__AES__" : 1 00:04:23.215 Fetching value of define "__AVX__" : 1 00:04:23.215 Fetching value of define "__AVX2__" : 1 00:04:23.215 Fetching value of define "__AVX512BW__" : (undefined) 00:04:23.215 Fetching value of define "__AVX512CD__" : (undefined) 00:04:23.215 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:23.215 Fetching value of define "__AVX512F__" : (undefined) 00:04:23.215 Fetching value of define "__AVX512VL__" : (undefined) 00:04:23.215 Fetching value of define "__PCLMUL__" : 1 00:04:23.215 Fetching value of define "__RDRND__" : 1 00:04:23.215 Fetching value of define "__RDSEED__" : 1 00:04:23.215 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:23.215 Fetching value of define "__znver1__" : (undefined) 00:04:23.215 Fetching value of define "__znver2__" : (undefined) 00:04:23.215 Fetching value of define "__znver3__" : (undefined) 00:04:23.215 Fetching value of define "__znver4__" : (undefined) 00:04:23.215 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:23.215 Message: lib/log: Defining dependency "log" 00:04:23.215 Message: lib/kvargs: Defining dependency "kvargs" 00:04:23.215 Message: lib/telemetry: Defining dependency "telemetry" 00:04:23.215 Checking for function "getentropy" : NO 00:04:23.215 Message: lib/eal: Defining dependency "eal" 00:04:23.215 Message: lib/ring: Defining dependency "ring" 00:04:23.215 Message: lib/rcu: Defining dependency "rcu" 00:04:23.215 Message: lib/mempool: Defining dependency "mempool" 00:04:23.215 Message: lib/mbuf: Defining dependency "mbuf" 00:04:23.215 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:23.215 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:23.215 Compiler for C supports arguments -mpclmul: YES 00:04:23.215 Compiler for C supports arguments -maes: YES 00:04:23.215 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:23.215 Compiler for C supports arguments -mavx512bw: YES 00:04:23.215 Compiler for C supports arguments -mavx512dq: YES 00:04:23.215 Compiler for C supports arguments -mavx512vl: YES 00:04:23.215 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:23.215 Compiler for C supports arguments -mavx2: YES 00:04:23.215 Compiler for C supports arguments -mavx: YES 00:04:23.215 Message: lib/net: Defining dependency "net" 00:04:23.215 Message: lib/meter: Defining dependency "meter" 00:04:23.215 Message: lib/ethdev: Defining dependency "ethdev" 00:04:23.215 Message: lib/pci: Defining dependency "pci" 00:04:23.215 Message: lib/cmdline: Defining dependency "cmdline" 00:04:23.215 Message: lib/hash: Defining dependency "hash" 00:04:23.215 Message: lib/timer: Defining dependency "timer" 00:04:23.215 Message: lib/compressdev: Defining dependency "compressdev" 00:04:23.215 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:23.215 Message: lib/dmadev: Defining dependency "dmadev" 00:04:23.215 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:23.215 Message: lib/power: Defining 
dependency "power" 00:04:23.215 Message: lib/reorder: Defining dependency "reorder" 00:04:23.215 Message: lib/security: Defining dependency "security" 00:04:23.215 Has header "linux/userfaultfd.h" : YES 00:04:23.215 Has header "linux/vduse.h" : YES 00:04:23.215 Message: lib/vhost: Defining dependency "vhost" 00:04:23.215 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:23.215 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:23.215 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:23.215 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:23.215 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:23.215 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:23.215 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:23.215 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:23.215 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:23.215 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:23.215 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:23.215 Configuring doxy-api-html.conf using configuration 00:04:23.215 Configuring doxy-api-man.conf using configuration 00:04:23.215 Program mandb found: YES (/usr/bin/mandb) 00:04:23.215 Program sphinx-build found: NO 00:04:23.215 Configuring rte_build_config.h using configuration 00:04:23.215 Message: 00:04:23.215 ================= 00:04:23.215 Applications Enabled 00:04:23.215 ================= 00:04:23.215 00:04:23.215 apps: 00:04:23.215 00:04:23.215 00:04:23.215 Message: 00:04:23.215 ================= 00:04:23.215 Libraries Enabled 00:04:23.215 ================= 00:04:23.215 00:04:23.215 libs: 00:04:23.215 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:23.215 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:23.215 cryptodev, dmadev, power, reorder, security, vhost, 00:04:23.215 00:04:23.215 Message: 00:04:23.215 =============== 00:04:23.215 Drivers Enabled 00:04:23.215 =============== 00:04:23.215 00:04:23.215 common: 00:04:23.215 00:04:23.215 bus: 00:04:23.215 pci, vdev, 00:04:23.215 mempool: 00:04:23.215 ring, 00:04:23.215 dma: 00:04:23.215 00:04:23.215 net: 00:04:23.215 00:04:23.215 crypto: 00:04:23.215 00:04:23.215 compress: 00:04:23.215 00:04:23.215 vdpa: 00:04:23.215 00:04:23.215 00:04:23.215 Message: 00:04:23.215 ================= 00:04:23.216 Content Skipped 00:04:23.216 ================= 00:04:23.216 00:04:23.216 apps: 00:04:23.216 dumpcap: explicitly disabled via build config 00:04:23.216 graph: explicitly disabled via build config 00:04:23.216 pdump: explicitly disabled via build config 00:04:23.216 proc-info: explicitly disabled via build config 00:04:23.216 test-acl: explicitly disabled via build config 00:04:23.216 test-bbdev: explicitly disabled via build config 00:04:23.216 test-cmdline: explicitly disabled via build config 00:04:23.216 test-compress-perf: explicitly disabled via build config 00:04:23.216 test-crypto-perf: explicitly disabled via build config 00:04:23.216 test-dma-perf: explicitly disabled via build config 00:04:23.216 test-eventdev: explicitly disabled via build config 00:04:23.216 test-fib: explicitly disabled via build config 00:04:23.216 test-flow-perf: explicitly disabled via build config 00:04:23.216 test-gpudev: explicitly disabled via build config 00:04:23.216 test-mldev: explicitly disabled via build config 00:04:23.216 test-pipeline: 
explicitly disabled via build config 00:04:23.216 test-pmd: explicitly disabled via build config 00:04:23.216 test-regex: explicitly disabled via build config 00:04:23.216 test-sad: explicitly disabled via build config 00:04:23.216 test-security-perf: explicitly disabled via build config 00:04:23.216 00:04:23.216 libs: 00:04:23.216 argparse: explicitly disabled via build config 00:04:23.216 metrics: explicitly disabled via build config 00:04:23.216 acl: explicitly disabled via build config 00:04:23.216 bbdev: explicitly disabled via build config 00:04:23.216 bitratestats: explicitly disabled via build config 00:04:23.216 bpf: explicitly disabled via build config 00:04:23.216 cfgfile: explicitly disabled via build config 00:04:23.216 distributor: explicitly disabled via build config 00:04:23.216 efd: explicitly disabled via build config 00:04:23.216 eventdev: explicitly disabled via build config 00:04:23.216 dispatcher: explicitly disabled via build config 00:04:23.216 gpudev: explicitly disabled via build config 00:04:23.216 gro: explicitly disabled via build config 00:04:23.216 gso: explicitly disabled via build config 00:04:23.216 ip_frag: explicitly disabled via build config 00:04:23.216 jobstats: explicitly disabled via build config 00:04:23.216 latencystats: explicitly disabled via build config 00:04:23.216 lpm: explicitly disabled via build config 00:04:23.216 member: explicitly disabled via build config 00:04:23.216 pcapng: explicitly disabled via build config 00:04:23.216 rawdev: explicitly disabled via build config 00:04:23.216 regexdev: explicitly disabled via build config 00:04:23.216 mldev: explicitly disabled via build config 00:04:23.216 rib: explicitly disabled via build config 00:04:23.216 sched: explicitly disabled via build config 00:04:23.216 stack: explicitly disabled via build config 00:04:23.216 ipsec: explicitly disabled via build config 00:04:23.216 pdcp: explicitly disabled via build config 00:04:23.216 fib: explicitly disabled via build config 00:04:23.216 port: explicitly disabled via build config 00:04:23.216 pdump: explicitly disabled via build config 00:04:23.216 table: explicitly disabled via build config 00:04:23.216 pipeline: explicitly disabled via build config 00:04:23.216 graph: explicitly disabled via build config 00:04:23.216 node: explicitly disabled via build config 00:04:23.216 00:04:23.216 drivers: 00:04:23.216 common/cpt: not in enabled drivers build config 00:04:23.216 common/dpaax: not in enabled drivers build config 00:04:23.216 common/iavf: not in enabled drivers build config 00:04:23.216 common/idpf: not in enabled drivers build config 00:04:23.216 common/ionic: not in enabled drivers build config 00:04:23.216 common/mvep: not in enabled drivers build config 00:04:23.216 common/octeontx: not in enabled drivers build config 00:04:23.216 bus/auxiliary: not in enabled drivers build config 00:04:23.216 bus/cdx: not in enabled drivers build config 00:04:23.216 bus/dpaa: not in enabled drivers build config 00:04:23.216 bus/fslmc: not in enabled drivers build config 00:04:23.216 bus/ifpga: not in enabled drivers build config 00:04:23.216 bus/platform: not in enabled drivers build config 00:04:23.216 bus/uacce: not in enabled drivers build config 00:04:23.216 bus/vmbus: not in enabled drivers build config 00:04:23.216 common/cnxk: not in enabled drivers build config 00:04:23.216 common/mlx5: not in enabled drivers build config 00:04:23.216 common/nfp: not in enabled drivers build config 00:04:23.216 common/nitrox: not in enabled drivers build config 
00:04:23.216 common/qat: not in enabled drivers build config 00:04:23.216 common/sfc_efx: not in enabled drivers build config 00:04:23.216 mempool/bucket: not in enabled drivers build config 00:04:23.216 mempool/cnxk: not in enabled drivers build config 00:04:23.216 mempool/dpaa: not in enabled drivers build config 00:04:23.216 mempool/dpaa2: not in enabled drivers build config 00:04:23.216 mempool/octeontx: not in enabled drivers build config 00:04:23.216 mempool/stack: not in enabled drivers build config 00:04:23.216 dma/cnxk: not in enabled drivers build config 00:04:23.216 dma/dpaa: not in enabled drivers build config 00:04:23.216 dma/dpaa2: not in enabled drivers build config 00:04:23.216 dma/hisilicon: not in enabled drivers build config 00:04:23.216 dma/idxd: not in enabled drivers build config 00:04:23.216 dma/ioat: not in enabled drivers build config 00:04:23.216 dma/skeleton: not in enabled drivers build config 00:04:23.216 net/af_packet: not in enabled drivers build config 00:04:23.216 net/af_xdp: not in enabled drivers build config 00:04:23.216 net/ark: not in enabled drivers build config 00:04:23.216 net/atlantic: not in enabled drivers build config 00:04:23.216 net/avp: not in enabled drivers build config 00:04:23.216 net/axgbe: not in enabled drivers build config 00:04:23.216 net/bnx2x: not in enabled drivers build config 00:04:23.216 net/bnxt: not in enabled drivers build config 00:04:23.216 net/bonding: not in enabled drivers build config 00:04:23.216 net/cnxk: not in enabled drivers build config 00:04:23.216 net/cpfl: not in enabled drivers build config 00:04:23.216 net/cxgbe: not in enabled drivers build config 00:04:23.216 net/dpaa: not in enabled drivers build config 00:04:23.216 net/dpaa2: not in enabled drivers build config 00:04:23.216 net/e1000: not in enabled drivers build config 00:04:23.216 net/ena: not in enabled drivers build config 00:04:23.216 net/enetc: not in enabled drivers build config 00:04:23.216 net/enetfec: not in enabled drivers build config 00:04:23.216 net/enic: not in enabled drivers build config 00:04:23.216 net/failsafe: not in enabled drivers build config 00:04:23.216 net/fm10k: not in enabled drivers build config 00:04:23.216 net/gve: not in enabled drivers build config 00:04:23.216 net/hinic: not in enabled drivers build config 00:04:23.216 net/hns3: not in enabled drivers build config 00:04:23.216 net/i40e: not in enabled drivers build config 00:04:23.216 net/iavf: not in enabled drivers build config 00:04:23.216 net/ice: not in enabled drivers build config 00:04:23.216 net/idpf: not in enabled drivers build config 00:04:23.216 net/igc: not in enabled drivers build config 00:04:23.216 net/ionic: not in enabled drivers build config 00:04:23.216 net/ipn3ke: not in enabled drivers build config 00:04:23.216 net/ixgbe: not in enabled drivers build config 00:04:23.216 net/mana: not in enabled drivers build config 00:04:23.216 net/memif: not in enabled drivers build config 00:04:23.216 net/mlx4: not in enabled drivers build config 00:04:23.216 net/mlx5: not in enabled drivers build config 00:04:23.216 net/mvneta: not in enabled drivers build config 00:04:23.216 net/mvpp2: not in enabled drivers build config 00:04:23.216 net/netvsc: not in enabled drivers build config 00:04:23.216 net/nfb: not in enabled drivers build config 00:04:23.216 net/nfp: not in enabled drivers build config 00:04:23.216 net/ngbe: not in enabled drivers build config 00:04:23.216 net/null: not in enabled drivers build config 00:04:23.216 net/octeontx: not in enabled drivers 
build config 00:04:23.216 net/octeon_ep: not in enabled drivers build config 00:04:23.216 net/pcap: not in enabled drivers build config 00:04:23.216 net/pfe: not in enabled drivers build config 00:04:23.216 net/qede: not in enabled drivers build config 00:04:23.216 net/ring: not in enabled drivers build config 00:04:23.216 net/sfc: not in enabled drivers build config 00:04:23.216 net/softnic: not in enabled drivers build config 00:04:23.216 net/tap: not in enabled drivers build config 00:04:23.216 net/thunderx: not in enabled drivers build config 00:04:23.216 net/txgbe: not in enabled drivers build config 00:04:23.216 net/vdev_netvsc: not in enabled drivers build config 00:04:23.216 net/vhost: not in enabled drivers build config 00:04:23.216 net/virtio: not in enabled drivers build config 00:04:23.216 net/vmxnet3: not in enabled drivers build config 00:04:23.216 raw/*: missing internal dependency, "rawdev" 00:04:23.216 crypto/armv8: not in enabled drivers build config 00:04:23.216 crypto/bcmfs: not in enabled drivers build config 00:04:23.216 crypto/caam_jr: not in enabled drivers build config 00:04:23.216 crypto/ccp: not in enabled drivers build config 00:04:23.216 crypto/cnxk: not in enabled drivers build config 00:04:23.216 crypto/dpaa_sec: not in enabled drivers build config 00:04:23.216 crypto/dpaa2_sec: not in enabled drivers build config 00:04:23.216 crypto/ipsec_mb: not in enabled drivers build config 00:04:23.216 crypto/mlx5: not in enabled drivers build config 00:04:23.216 crypto/mvsam: not in enabled drivers build config 00:04:23.216 crypto/nitrox: not in enabled drivers build config 00:04:23.216 crypto/null: not in enabled drivers build config 00:04:23.216 crypto/octeontx: not in enabled drivers build config 00:04:23.216 crypto/openssl: not in enabled drivers build config 00:04:23.216 crypto/scheduler: not in enabled drivers build config 00:04:23.216 crypto/uadk: not in enabled drivers build config 00:04:23.216 crypto/virtio: not in enabled drivers build config 00:04:23.216 compress/isal: not in enabled drivers build config 00:04:23.216 compress/mlx5: not in enabled drivers build config 00:04:23.216 compress/nitrox: not in enabled drivers build config 00:04:23.216 compress/octeontx: not in enabled drivers build config 00:04:23.216 compress/zlib: not in enabled drivers build config 00:04:23.216 regex/*: missing internal dependency, "regexdev" 00:04:23.216 ml/*: missing internal dependency, "mldev" 00:04:23.216 vdpa/ifc: not in enabled drivers build config 00:04:23.216 vdpa/mlx5: not in enabled drivers build config 00:04:23.216 vdpa/nfp: not in enabled drivers build config 00:04:23.216 vdpa/sfc: not in enabled drivers build config 00:04:23.216 event/*: missing internal dependency, "eventdev" 00:04:23.216 baseband/*: missing internal dependency, "bbdev" 00:04:23.217 gpu/*: missing internal dependency, "gpudev" 00:04:23.217 00:04:23.217 00:04:23.217 Build targets in project: 85 00:04:23.217 00:04:23.217 DPDK 24.03.0 00:04:23.217 00:04:23.217 User defined options 00:04:23.217 buildtype : debug 00:04:23.217 default_library : shared 00:04:23.217 libdir : lib 00:04:23.217 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:23.217 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:23.217 c_link_args : 00:04:23.217 cpu_instruction_set: native 00:04:23.217 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:23.217 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:23.217 enable_docs : false 00:04:23.217 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:23.217 enable_kmods : false 00:04:23.217 max_lcores : 128 00:04:23.217 tests : false 00:04:23.217 00:04:23.217 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:23.475 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:23.733 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:23.733 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:23.733 [3/268] Linking static target lib/librte_kvargs.a 00:04:23.733 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:23.733 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:23.733 [6/268] Linking static target lib/librte_log.a 00:04:24.300 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.300 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:24.300 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:24.300 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:24.300 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:24.561 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:24.561 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:24.819 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:24.819 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:24.819 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.819 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:24.819 [18/268] Linking static target lib/librte_telemetry.a 00:04:24.819 [19/268] Linking target lib/librte_log.so.24.1 00:04:25.076 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:25.076 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:25.333 [22/268] Linking target lib/librte_kvargs.so.24.1 00:04:25.590 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:25.590 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:25.590 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:25.590 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:25.590 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:25.590 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:25.590 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:25.848 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:25.848 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:25.848 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.848 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:26.106 [34/268] Linking target lib/librte_telemetry.so.24.1 00:04:26.106 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:26.106 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:26.364 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:26.364 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:26.364 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:26.622 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:26.622 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:26.622 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:26.622 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:26.622 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:26.880 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:26.881 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:26.881 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:26.881 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:27.139 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:27.139 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:27.397 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:27.397 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:27.397 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:27.397 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:27.654 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:27.654 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:27.654 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:27.911 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:27.911 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:27.911 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:27.911 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:28.168 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:28.168 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:28.168 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:28.168 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:28.425 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:28.682 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:28.682 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:28.682 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:28.940 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:28.940 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:28.940 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:28.940 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:29.198 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:29.198 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:29.198 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:29.456 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:29.456 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:29.456 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:29.713 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:29.713 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:29.713 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:29.713 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:29.971 [84/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:29.971 [85/268] Linking static target lib/librte_rcu.a 00:04:29.971 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:29.971 [87/268] Linking static target lib/librte_eal.a 00:04:30.229 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:30.229 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:30.229 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:30.487 [91/268] Linking static target lib/librte_ring.a 00:04:30.487 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:30.487 [93/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.746 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:30.746 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:30.746 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:30.746 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:30.746 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:31.005 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:31.005 [100/268] Linking static target lib/librte_mempool.a 00:04:31.005 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:31.005 [102/268] Linking static target lib/librte_mbuf.a 00:04:31.005 [103/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.264 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:31.522 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:31.523 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:31.523 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:31.523 [108/268] Linking static target lib/librte_meter.a 00:04:31.781 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:32.040 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:32.040 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:32.040 
[112/268] Linking static target lib/librte_net.a 00:04:32.040 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.040 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:32.298 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:32.298 [116/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.298 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.557 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.557 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:32.815 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:33.073 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:33.330 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:33.330 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:33.587 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:33.587 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:33.587 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:33.587 [127/268] Linking static target lib/librte_pci.a 00:04:33.587 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:33.587 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:33.587 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:33.587 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:33.587 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:33.846 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:33.846 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:33.846 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:33.846 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:34.106 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:34.106 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.106 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:34.106 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:34.106 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:34.106 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:34.106 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:34.364 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:34.364 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:34.364 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:34.364 [147/268] Linking static target lib/librte_ethdev.a 00:04:34.364 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:34.364 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:34.634 [150/268] Linking static target lib/librte_cmdline.a 00:04:34.891 [151/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:34.891 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:35.149 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:35.149 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:35.149 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:35.149 [156/268] Linking static target lib/librte_timer.a 00:04:35.149 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:35.149 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:35.149 [159/268] Linking static target lib/librte_hash.a 00:04:35.730 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:35.730 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:35.730 [162/268] Linking static target lib/librte_compressdev.a 00:04:35.988 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:35.988 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.988 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:35.988 [166/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:36.244 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:36.244 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.244 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:36.244 [170/268] Linking static target lib/librte_dmadev.a 00:04:36.244 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:36.808 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:36.808 [173/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.808 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:36.808 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:36.808 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.808 [177/268] Linking static target lib/librte_cryptodev.a 00:04:36.808 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:37.065 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:37.322 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:37.322 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:37.322 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:37.322 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:37.322 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:37.888 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:37.889 [186/268] Linking static target lib/librte_power.a 00:04:37.889 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:37.889 [188/268] Linking static target lib/librte_security.a 00:04:38.147 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:38.147 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:38.147 [191/268] Compiling C object 
lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:38.147 [192/268] Linking static target lib/librte_reorder.a 00:04:38.405 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:38.405 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:38.663 [195/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.922 [196/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.180 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.180 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:39.180 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:39.439 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:39.439 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:39.699 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:39.699 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:39.699 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:39.969 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:39.969 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:40.227 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:40.227 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:40.227 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:40.486 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:40.486 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:40.486 [212/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:40.486 [213/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:40.486 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:40.744 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:40.744 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:40.744 [217/268] Linking static target drivers/librte_bus_vdev.a 00:04:40.744 [218/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:40.745 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:40.745 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:40.745 [221/268] Linking static target drivers/librte_bus_pci.a 00:04:40.745 [222/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:40.745 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:40.745 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:40.745 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:40.745 [226/268] Linking static target drivers/librte_mempool_ring.a 00:04:41.003 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.260 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by 
meson to capture output) 00:04:41.837 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:42.095 [230/268] Linking static target lib/librte_vhost.a 00:04:42.660 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:42.917 [232/268] Linking target lib/librte_eal.so.24.1 00:04:42.917 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:42.917 [234/268] Linking target lib/librte_ring.so.24.1 00:04:42.917 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:42.918 [236/268] Linking target lib/librte_dmadev.so.24.1 00:04:42.918 [237/268] Linking target lib/librte_pci.so.24.1 00:04:42.918 [238/268] Linking target lib/librte_timer.so.24.1 00:04:42.918 [239/268] Linking target lib/librte_meter.so.24.1 00:04:43.174 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:43.174 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:43.174 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:43.174 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:43.174 [244/268] Linking target lib/librte_rcu.so.24.1 00:04:43.174 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:43.174 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:43.174 [247/268] Linking target lib/librte_mempool.so.24.1 00:04:43.175 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:43.431 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:43.431 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:43.431 [251/268] Linking target lib/librte_mbuf.so.24.1 00:04:43.431 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:43.431 [253/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:43.431 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:43.687 [255/268] Linking target lib/librte_reorder.so.24.1 00:04:43.687 [256/268] Linking target lib/librte_compressdev.so.24.1 00:04:43.687 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:04:43.687 [258/268] Linking target lib/librte_net.so.24.1 00:04:43.687 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:43.687 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:43.687 [261/268] Linking target lib/librte_security.so.24.1 00:04:43.687 [262/268] Linking target lib/librte_cmdline.so.24.1 00:04:43.687 [263/268] Linking target lib/librte_hash.so.24.1 00:04:43.944 [264/268] Linking target lib/librte_ethdev.so.24.1 00:04:43.944 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:43.944 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:43.944 [267/268] Linking target lib/librte_power.so.24.1 00:04:44.201 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:44.201 INFO: autodetecting backend as ninja 00:04:44.201 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:16.286 CC lib/log/log_flags.o 00:05:16.286 CC lib/log/log.o 00:05:16.286 CC lib/log/log_deprecated.o 00:05:16.286 CC 
lib/ut_mock/mock.o 00:05:16.286 CC lib/ut/ut.o 00:05:16.286 LIB libspdk_ut.a 00:05:16.286 SO libspdk_ut.so.2.0 00:05:16.286 LIB libspdk_ut_mock.a 00:05:16.286 LIB libspdk_log.a 00:05:16.286 SYMLINK libspdk_ut.so 00:05:16.286 SO libspdk_ut_mock.so.6.0 00:05:16.286 SO libspdk_log.so.7.1 00:05:16.286 SYMLINK libspdk_ut_mock.so 00:05:16.286 SYMLINK libspdk_log.so 00:05:16.286 CC lib/dma/dma.o 00:05:16.286 CC lib/ioat/ioat.o 00:05:16.286 CC lib/util/bit_array.o 00:05:16.286 CC lib/util/cpuset.o 00:05:16.286 CC lib/util/base64.o 00:05:16.286 CXX lib/trace_parser/trace.o 00:05:16.286 CC lib/util/crc32.o 00:05:16.286 CC lib/util/crc32c.o 00:05:16.286 CC lib/util/crc16.o 00:05:16.286 CC lib/util/crc32_ieee.o 00:05:16.286 CC lib/vfio_user/host/vfio_user_pci.o 00:05:16.286 CC lib/util/crc64.o 00:05:16.286 CC lib/util/dif.o 00:05:16.286 CC lib/vfio_user/host/vfio_user.o 00:05:16.286 CC lib/util/fd.o 00:05:16.286 LIB libspdk_dma.a 00:05:16.286 SO libspdk_dma.so.5.0 00:05:16.286 CC lib/util/fd_group.o 00:05:16.286 CC lib/util/file.o 00:05:16.286 SYMLINK libspdk_dma.so 00:05:16.286 CC lib/util/hexlify.o 00:05:16.286 LIB libspdk_ioat.a 00:05:16.286 CC lib/util/iov.o 00:05:16.286 SO libspdk_ioat.so.7.0 00:05:16.286 CC lib/util/math.o 00:05:16.286 CC lib/util/net.o 00:05:16.286 SYMLINK libspdk_ioat.so 00:05:16.286 CC lib/util/pipe.o 00:05:16.286 CC lib/util/strerror_tls.o 00:05:16.286 CC lib/util/string.o 00:05:16.286 LIB libspdk_vfio_user.a 00:05:16.286 CC lib/util/uuid.o 00:05:16.286 SO libspdk_vfio_user.so.5.0 00:05:16.286 CC lib/util/xor.o 00:05:16.286 CC lib/util/zipf.o 00:05:16.286 SYMLINK libspdk_vfio_user.so 00:05:16.286 CC lib/util/md5.o 00:05:16.286 LIB libspdk_util.a 00:05:16.286 SO libspdk_util.so.10.1 00:05:16.286 LIB libspdk_trace_parser.a 00:05:16.286 SYMLINK libspdk_util.so 00:05:16.286 SO libspdk_trace_parser.so.6.0 00:05:16.286 SYMLINK libspdk_trace_parser.so 00:05:16.286 CC lib/idxd/idxd.o 00:05:16.286 CC lib/idxd/idxd_user.o 00:05:16.286 CC lib/idxd/idxd_kernel.o 00:05:16.286 CC lib/conf/conf.o 00:05:16.286 CC lib/json/json_parse.o 00:05:16.286 CC lib/json/json_util.o 00:05:16.286 CC lib/vmd/vmd.o 00:05:16.286 CC lib/json/json_write.o 00:05:16.286 CC lib/rdma_utils/rdma_utils.o 00:05:16.286 CC lib/env_dpdk/env.o 00:05:16.543 CC lib/env_dpdk/memory.o 00:05:16.543 LIB libspdk_conf.a 00:05:16.543 CC lib/env_dpdk/pci.o 00:05:16.543 CC lib/vmd/led.o 00:05:16.543 CC lib/env_dpdk/init.o 00:05:16.543 SO libspdk_conf.so.6.0 00:05:16.543 LIB libspdk_rdma_utils.a 00:05:16.800 LIB libspdk_json.a 00:05:16.800 SO libspdk_rdma_utils.so.1.0 00:05:16.800 SYMLINK libspdk_conf.so 00:05:16.800 SO libspdk_json.so.6.0 00:05:16.800 CC lib/env_dpdk/threads.o 00:05:16.800 SYMLINK libspdk_rdma_utils.so 00:05:16.800 CC lib/env_dpdk/pci_ioat.o 00:05:16.800 CC lib/env_dpdk/pci_virtio.o 00:05:16.800 SYMLINK libspdk_json.so 00:05:16.800 CC lib/env_dpdk/pci_vmd.o 00:05:16.800 CC lib/env_dpdk/pci_idxd.o 00:05:17.058 CC lib/env_dpdk/pci_event.o 00:05:17.058 CC lib/env_dpdk/sigbus_handler.o 00:05:17.058 LIB libspdk_idxd.a 00:05:17.058 SO libspdk_idxd.so.12.1 00:05:17.058 LIB libspdk_vmd.a 00:05:17.058 CC lib/env_dpdk/pci_dpdk.o 00:05:17.058 SYMLINK libspdk_idxd.so 00:05:17.058 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:17.058 SO libspdk_vmd.so.6.0 00:05:17.058 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:17.058 CC lib/rdma_provider/common.o 00:05:17.058 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:17.058 SYMLINK libspdk_vmd.so 00:05:17.316 CC lib/jsonrpc/jsonrpc_server.o 00:05:17.316 CC lib/jsonrpc/jsonrpc_client.o 00:05:17.316 
CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:17.316 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:17.316 LIB libspdk_rdma_provider.a 00:05:17.316 SO libspdk_rdma_provider.so.7.0 00:05:17.573 SYMLINK libspdk_rdma_provider.so 00:05:17.573 LIB libspdk_jsonrpc.a 00:05:17.573 SO libspdk_jsonrpc.so.6.0 00:05:17.573 SYMLINK libspdk_jsonrpc.so 00:05:17.830 CC lib/rpc/rpc.o 00:05:17.830 LIB libspdk_env_dpdk.a 00:05:18.087 SO libspdk_env_dpdk.so.15.1 00:05:18.087 LIB libspdk_rpc.a 00:05:18.087 SO libspdk_rpc.so.6.0 00:05:18.087 SYMLINK libspdk_env_dpdk.so 00:05:18.346 SYMLINK libspdk_rpc.so 00:05:18.603 CC lib/notify/notify.o 00:05:18.603 CC lib/keyring/keyring.o 00:05:18.603 CC lib/notify/notify_rpc.o 00:05:18.603 CC lib/keyring/keyring_rpc.o 00:05:18.603 CC lib/trace/trace_flags.o 00:05:18.603 CC lib/trace/trace_rpc.o 00:05:18.603 CC lib/trace/trace.o 00:05:18.603 LIB libspdk_notify.a 00:05:18.861 SO libspdk_notify.so.6.0 00:05:18.861 LIB libspdk_keyring.a 00:05:18.861 LIB libspdk_trace.a 00:05:18.861 SYMLINK libspdk_notify.so 00:05:18.861 SO libspdk_keyring.so.2.0 00:05:18.861 SO libspdk_trace.so.11.0 00:05:18.861 SYMLINK libspdk_keyring.so 00:05:18.861 SYMLINK libspdk_trace.so 00:05:19.118 CC lib/thread/iobuf.o 00:05:19.118 CC lib/thread/thread.o 00:05:19.118 CC lib/sock/sock.o 00:05:19.118 CC lib/sock/sock_rpc.o 00:05:19.685 LIB libspdk_sock.a 00:05:19.685 SO libspdk_sock.so.10.0 00:05:19.685 SYMLINK libspdk_sock.so 00:05:20.250 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:20.250 CC lib/nvme/nvme_ctrlr.o 00:05:20.250 CC lib/nvme/nvme_fabric.o 00:05:20.250 CC lib/nvme/nvme_ns.o 00:05:20.250 CC lib/nvme/nvme_ns_cmd.o 00:05:20.250 CC lib/nvme/nvme_pcie.o 00:05:20.250 CC lib/nvme/nvme_pcie_common.o 00:05:20.250 CC lib/nvme/nvme_qpair.o 00:05:20.250 CC lib/nvme/nvme.o 00:05:21.186 CC lib/nvme/nvme_quirks.o 00:05:21.186 LIB libspdk_thread.a 00:05:21.186 CC lib/nvme/nvme_transport.o 00:05:21.186 SO libspdk_thread.so.11.0 00:05:21.186 CC lib/nvme/nvme_discovery.o 00:05:21.186 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:21.186 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:21.186 SYMLINK libspdk_thread.so 00:05:21.186 CC lib/nvme/nvme_tcp.o 00:05:21.186 CC lib/nvme/nvme_opal.o 00:05:21.186 CC lib/nvme/nvme_io_msg.o 00:05:21.444 CC lib/nvme/nvme_poll_group.o 00:05:21.702 CC lib/nvme/nvme_zns.o 00:05:21.702 CC lib/nvme/nvme_stubs.o 00:05:21.702 CC lib/accel/accel.o 00:05:21.959 CC lib/blob/blobstore.o 00:05:21.960 CC lib/init/json_config.o 00:05:21.960 CC lib/init/subsystem.o 00:05:22.218 CC lib/virtio/virtio.o 00:05:22.218 CC lib/nvme/nvme_auth.o 00:05:22.218 CC lib/init/subsystem_rpc.o 00:05:22.218 CC lib/fsdev/fsdev.o 00:05:22.476 CC lib/init/rpc.o 00:05:22.476 CC lib/virtio/virtio_vhost_user.o 00:05:22.476 CC lib/virtio/virtio_vfio_user.o 00:05:22.476 CC lib/virtio/virtio_pci.o 00:05:22.733 LIB libspdk_init.a 00:05:22.733 SO libspdk_init.so.6.0 00:05:22.733 CC lib/accel/accel_rpc.o 00:05:22.733 CC lib/nvme/nvme_cuse.o 00:05:22.733 CC lib/fsdev/fsdev_io.o 00:05:22.733 CC lib/nvme/nvme_rdma.o 00:05:22.733 SYMLINK libspdk_init.so 00:05:22.733 CC lib/accel/accel_sw.o 00:05:22.733 LIB libspdk_virtio.a 00:05:22.990 SO libspdk_virtio.so.7.0 00:05:22.990 SYMLINK libspdk_virtio.so 00:05:22.990 CC lib/fsdev/fsdev_rpc.o 00:05:22.990 CC lib/blob/request.o 00:05:22.990 CC lib/blob/zeroes.o 00:05:23.250 LIB libspdk_accel.a 00:05:23.250 CC lib/blob/blob_bs_dev.o 00:05:23.250 LIB libspdk_fsdev.a 00:05:23.250 CC lib/event/app.o 00:05:23.250 SO libspdk_accel.so.16.0 00:05:23.250 SO libspdk_fsdev.so.2.0 00:05:23.250 SYMLINK libspdk_accel.so 
00:05:23.250 CC lib/event/reactor.o 00:05:23.250 CC lib/event/log_rpc.o 00:05:23.250 CC lib/event/app_rpc.o 00:05:23.250 SYMLINK libspdk_fsdev.so 00:05:23.508 CC lib/event/scheduler_static.o 00:05:23.508 CC lib/bdev/bdev.o 00:05:23.508 CC lib/bdev/bdev_rpc.o 00:05:23.508 CC lib/bdev/bdev_zone.o 00:05:23.508 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:23.508 CC lib/bdev/part.o 00:05:23.508 CC lib/bdev/scsi_nvme.o 00:05:23.765 LIB libspdk_event.a 00:05:23.765 SO libspdk_event.so.14.0 00:05:24.023 SYMLINK libspdk_event.so 00:05:24.279 LIB libspdk_fuse_dispatcher.a 00:05:24.279 SO libspdk_fuse_dispatcher.so.1.0 00:05:24.279 SYMLINK libspdk_fuse_dispatcher.so 00:05:24.279 LIB libspdk_nvme.a 00:05:24.536 SO libspdk_nvme.so.15.0 00:05:24.795 SYMLINK libspdk_nvme.so 00:05:25.374 LIB libspdk_blob.a 00:05:25.374 SO libspdk_blob.so.12.0 00:05:25.374 SYMLINK libspdk_blob.so 00:05:25.632 CC lib/lvol/lvol.o 00:05:25.632 CC lib/blobfs/blobfs.o 00:05:25.632 CC lib/blobfs/tree.o 00:05:26.567 LIB libspdk_bdev.a 00:05:26.567 SO libspdk_bdev.so.17.0 00:05:26.567 LIB libspdk_blobfs.a 00:05:26.567 SO libspdk_blobfs.so.11.0 00:05:26.567 LIB libspdk_lvol.a 00:05:26.567 SYMLINK libspdk_bdev.so 00:05:26.567 SYMLINK libspdk_blobfs.so 00:05:26.567 SO libspdk_lvol.so.11.0 00:05:26.826 SYMLINK libspdk_lvol.so 00:05:26.826 CC lib/nvmf/ctrlr.o 00:05:26.826 CC lib/nvmf/ctrlr_discovery.o 00:05:26.826 CC lib/nvmf/ctrlr_bdev.o 00:05:26.826 CC lib/ublk/ublk.o 00:05:26.826 CC lib/nvmf/subsystem.o 00:05:26.826 CC lib/ftl/ftl_core.o 00:05:26.826 CC lib/ublk/ublk_rpc.o 00:05:26.826 CC lib/nbd/nbd.o 00:05:26.826 CC lib/nvmf/nvmf.o 00:05:26.826 CC lib/scsi/dev.o 00:05:27.084 CC lib/nvmf/nvmf_rpc.o 00:05:27.084 CC lib/scsi/lun.o 00:05:27.343 CC lib/ftl/ftl_init.o 00:05:27.343 CC lib/nbd/nbd_rpc.o 00:05:27.343 CC lib/nvmf/transport.o 00:05:27.343 LIB libspdk_nbd.a 00:05:27.343 LIB libspdk_ublk.a 00:05:27.602 CC lib/ftl/ftl_layout.o 00:05:27.602 CC lib/scsi/port.o 00:05:27.602 SO libspdk_nbd.so.7.0 00:05:27.602 SO libspdk_ublk.so.3.0 00:05:27.602 SYMLINK libspdk_ublk.so 00:05:27.602 SYMLINK libspdk_nbd.so 00:05:27.602 CC lib/ftl/ftl_debug.o 00:05:27.602 CC lib/scsi/scsi.o 00:05:27.602 CC lib/nvmf/tcp.o 00:05:27.602 CC lib/nvmf/stubs.o 00:05:27.860 CC lib/nvmf/mdns_server.o 00:05:27.860 CC lib/scsi/scsi_bdev.o 00:05:27.860 CC lib/scsi/scsi_pr.o 00:05:27.860 CC lib/ftl/ftl_io.o 00:05:27.860 CC lib/nvmf/rdma.o 00:05:28.119 CC lib/nvmf/auth.o 00:05:28.119 CC lib/ftl/ftl_sb.o 00:05:28.119 CC lib/scsi/scsi_rpc.o 00:05:28.119 CC lib/scsi/task.o 00:05:28.119 CC lib/ftl/ftl_l2p.o 00:05:28.377 CC lib/ftl/ftl_l2p_flat.o 00:05:28.377 CC lib/ftl/ftl_nv_cache.o 00:05:28.377 CC lib/ftl/ftl_band.o 00:05:28.377 CC lib/ftl/ftl_band_ops.o 00:05:28.377 LIB libspdk_scsi.a 00:05:28.377 CC lib/ftl/ftl_writer.o 00:05:28.377 SO libspdk_scsi.so.9.0 00:05:28.635 CC lib/ftl/ftl_rq.o 00:05:28.635 SYMLINK libspdk_scsi.so 00:05:28.635 CC lib/ftl/ftl_reloc.o 00:05:28.635 CC lib/ftl/ftl_l2p_cache.o 00:05:28.635 CC lib/ftl/ftl_p2l.o 00:05:28.635 CC lib/ftl/ftl_p2l_log.o 00:05:28.635 CC lib/ftl/mngt/ftl_mngt.o 00:05:28.893 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:28.893 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:28.893 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:29.152 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:29.152 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:29.152 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:29.152 CC lib/iscsi/conn.o 00:05:29.152 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:29.409 CC lib/iscsi/init_grp.o 00:05:29.409 CC lib/iscsi/iscsi.o 00:05:29.409 CC lib/iscsi/param.o 
00:05:29.409 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:29.409 CC lib/vhost/vhost.o 00:05:29.409 CC lib/vhost/vhost_rpc.o 00:05:29.409 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:29.409 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:29.667 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:29.667 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:29.667 CC lib/ftl/utils/ftl_conf.o 00:05:29.667 CC lib/iscsi/portal_grp.o 00:05:29.667 CC lib/ftl/utils/ftl_md.o 00:05:29.925 CC lib/iscsi/tgt_node.o 00:05:29.925 CC lib/iscsi/iscsi_subsystem.o 00:05:29.925 CC lib/iscsi/iscsi_rpc.o 00:05:29.925 CC lib/iscsi/task.o 00:05:30.183 CC lib/ftl/utils/ftl_mempool.o 00:05:30.183 LIB libspdk_nvmf.a 00:05:30.183 CC lib/vhost/vhost_scsi.o 00:05:30.183 CC lib/ftl/utils/ftl_bitmap.o 00:05:30.183 SO libspdk_nvmf.so.20.0 00:05:30.442 CC lib/vhost/vhost_blk.o 00:05:30.442 CC lib/vhost/rte_vhost_user.o 00:05:30.442 CC lib/ftl/utils/ftl_property.o 00:05:30.442 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:30.442 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:30.442 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:30.442 SYMLINK libspdk_nvmf.so 00:05:30.442 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:30.442 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:30.699 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:30.699 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:30.699 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:30.699 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:30.699 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:30.699 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:30.957 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:30.957 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:30.957 CC lib/ftl/base/ftl_base_dev.o 00:05:30.957 CC lib/ftl/base/ftl_base_bdev.o 00:05:30.957 LIB libspdk_iscsi.a 00:05:30.957 CC lib/ftl/ftl_trace.o 00:05:31.215 SO libspdk_iscsi.so.8.0 00:05:31.215 SYMLINK libspdk_iscsi.so 00:05:31.215 LIB libspdk_ftl.a 00:05:31.473 SO libspdk_ftl.so.9.0 00:05:31.731 LIB libspdk_vhost.a 00:05:31.731 SO libspdk_vhost.so.8.0 00:05:31.731 SYMLINK libspdk_vhost.so 00:05:31.989 SYMLINK libspdk_ftl.so 00:05:32.248 CC module/env_dpdk/env_dpdk_rpc.o 00:05:32.248 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:32.248 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:32.248 CC module/blob/bdev/blob_bdev.o 00:05:32.248 CC module/sock/posix/posix.o 00:05:32.248 CC module/accel/error/accel_error.o 00:05:32.248 CC module/scheduler/gscheduler/gscheduler.o 00:05:32.248 CC module/fsdev/aio/fsdev_aio.o 00:05:32.248 CC module/keyring/file/keyring.o 00:05:32.248 CC module/accel/ioat/accel_ioat.o 00:05:32.507 LIB libspdk_env_dpdk_rpc.a 00:05:32.507 SO libspdk_env_dpdk_rpc.so.6.0 00:05:32.507 SYMLINK libspdk_env_dpdk_rpc.so 00:05:32.507 LIB libspdk_scheduler_gscheduler.a 00:05:32.507 CC module/accel/ioat/accel_ioat_rpc.o 00:05:32.507 LIB libspdk_scheduler_dpdk_governor.a 00:05:32.507 CC module/keyring/file/keyring_rpc.o 00:05:32.507 SO libspdk_scheduler_gscheduler.so.4.0 00:05:32.507 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:32.507 LIB libspdk_scheduler_dynamic.a 00:05:32.507 CC module/accel/error/accel_error_rpc.o 00:05:32.507 SO libspdk_scheduler_dynamic.so.4.0 00:05:32.507 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:32.507 SYMLINK libspdk_scheduler_gscheduler.so 00:05:32.766 LIB libspdk_blob_bdev.a 00:05:32.766 SYMLINK libspdk_scheduler_dynamic.so 00:05:32.766 LIB libspdk_accel_ioat.a 00:05:32.766 SO libspdk_blob_bdev.so.12.0 00:05:32.766 SO libspdk_accel_ioat.so.6.0 00:05:32.766 LIB libspdk_keyring_file.a 00:05:32.766 SO libspdk_keyring_file.so.2.0 00:05:32.766 LIB libspdk_accel_error.a 
00:05:32.766 SYMLINK libspdk_blob_bdev.so 00:05:32.766 SYMLINK libspdk_accel_ioat.so 00:05:32.766 SO libspdk_accel_error.so.2.0 00:05:32.766 CC module/sock/uring/uring.o 00:05:32.766 SYMLINK libspdk_keyring_file.so 00:05:32.766 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:32.766 CC module/accel/dsa/accel_dsa.o 00:05:32.766 CC module/keyring/linux/keyring.o 00:05:32.766 SYMLINK libspdk_accel_error.so 00:05:32.766 CC module/fsdev/aio/linux_aio_mgr.o 00:05:32.766 CC module/accel/iaa/accel_iaa.o 00:05:33.024 CC module/accel/iaa/accel_iaa_rpc.o 00:05:33.024 CC module/keyring/linux/keyring_rpc.o 00:05:33.024 CC module/bdev/delay/vbdev_delay.o 00:05:33.024 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:33.024 LIB libspdk_fsdev_aio.a 00:05:33.024 CC module/blobfs/bdev/blobfs_bdev.o 00:05:33.024 LIB libspdk_sock_posix.a 00:05:33.024 SO libspdk_fsdev_aio.so.1.0 00:05:33.024 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:33.282 SO libspdk_sock_posix.so.6.0 00:05:33.282 CC module/accel/dsa/accel_dsa_rpc.o 00:05:33.282 LIB libspdk_keyring_linux.a 00:05:33.282 LIB libspdk_accel_iaa.a 00:05:33.282 SYMLINK libspdk_fsdev_aio.so 00:05:33.282 SO libspdk_keyring_linux.so.1.0 00:05:33.282 SO libspdk_accel_iaa.so.3.0 00:05:33.282 SYMLINK libspdk_sock_posix.so 00:05:33.282 SYMLINK libspdk_keyring_linux.so 00:05:33.282 SYMLINK libspdk_accel_iaa.so 00:05:33.282 LIB libspdk_accel_dsa.a 00:05:33.282 LIB libspdk_blobfs_bdev.a 00:05:33.282 SO libspdk_accel_dsa.so.5.0 00:05:33.282 CC module/bdev/error/vbdev_error.o 00:05:33.282 SO libspdk_blobfs_bdev.so.6.0 00:05:33.539 CC module/bdev/gpt/gpt.o 00:05:33.539 SYMLINK libspdk_accel_dsa.so 00:05:33.539 LIB libspdk_bdev_delay.a 00:05:33.539 CC module/bdev/lvol/vbdev_lvol.o 00:05:33.539 CC module/bdev/error/vbdev_error_rpc.o 00:05:33.539 CC module/bdev/null/bdev_null.o 00:05:33.539 CC module/bdev/malloc/bdev_malloc.o 00:05:33.539 CC module/bdev/nvme/bdev_nvme.o 00:05:33.539 SYMLINK libspdk_blobfs_bdev.so 00:05:33.539 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:33.539 SO libspdk_bdev_delay.so.6.0 00:05:33.539 LIB libspdk_sock_uring.a 00:05:33.539 SO libspdk_sock_uring.so.5.0 00:05:33.539 SYMLINK libspdk_bdev_delay.so 00:05:33.539 CC module/bdev/gpt/vbdev_gpt.o 00:05:33.539 SYMLINK libspdk_sock_uring.so 00:05:33.539 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:33.539 CC module/bdev/nvme/nvme_rpc.o 00:05:33.539 CC module/bdev/nvme/bdev_mdns_client.o 00:05:33.797 CC module/bdev/null/bdev_null_rpc.o 00:05:33.797 LIB libspdk_bdev_error.a 00:05:33.797 SO libspdk_bdev_error.so.6.0 00:05:33.797 SYMLINK libspdk_bdev_error.so 00:05:33.797 LIB libspdk_bdev_malloc.a 00:05:33.797 LIB libspdk_bdev_null.a 00:05:33.797 LIB libspdk_bdev_gpt.a 00:05:33.797 SO libspdk_bdev_malloc.so.6.0 00:05:33.797 SO libspdk_bdev_null.so.6.0 00:05:34.055 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:34.055 SO libspdk_bdev_gpt.so.6.0 00:05:34.055 CC module/bdev/passthru/vbdev_passthru.o 00:05:34.055 SYMLINK libspdk_bdev_null.so 00:05:34.055 CC module/bdev/raid/bdev_raid.o 00:05:34.055 SYMLINK libspdk_bdev_malloc.so 00:05:34.055 SYMLINK libspdk_bdev_gpt.so 00:05:34.055 CC module/bdev/nvme/vbdev_opal.o 00:05:34.055 CC module/bdev/split/vbdev_split.o 00:05:34.055 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:34.055 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:34.055 CC module/bdev/uring/bdev_uring.o 00:05:34.312 CC module/bdev/uring/bdev_uring_rpc.o 00:05:34.313 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:34.313 CC module/bdev/split/vbdev_split_rpc.o 00:05:34.313 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 
00:05:34.313 CC module/bdev/raid/bdev_raid_rpc.o 00:05:34.313 LIB libspdk_bdev_lvol.a 00:05:34.313 SO libspdk_bdev_lvol.so.6.0 00:05:34.571 SYMLINK libspdk_bdev_lvol.so 00:05:34.571 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:34.571 LIB libspdk_bdev_passthru.a 00:05:34.571 LIB libspdk_bdev_split.a 00:05:34.571 SO libspdk_bdev_passthru.so.6.0 00:05:34.571 CC module/bdev/raid/bdev_raid_sb.o 00:05:34.571 CC module/bdev/raid/raid0.o 00:05:34.571 SO libspdk_bdev_split.so.6.0 00:05:34.571 LIB libspdk_bdev_uring.a 00:05:34.571 CC module/bdev/raid/raid1.o 00:05:34.571 SO libspdk_bdev_uring.so.6.0 00:05:34.571 SYMLINK libspdk_bdev_passthru.so 00:05:34.571 SYMLINK libspdk_bdev_split.so 00:05:34.571 CC module/bdev/raid/concat.o 00:05:34.571 LIB libspdk_bdev_zone_block.a 00:05:34.571 SYMLINK libspdk_bdev_uring.so 00:05:34.571 CC module/bdev/aio/bdev_aio.o 00:05:34.829 SO libspdk_bdev_zone_block.so.6.0 00:05:34.829 SYMLINK libspdk_bdev_zone_block.so 00:05:34.829 CC module/bdev/aio/bdev_aio_rpc.o 00:05:34.829 CC module/bdev/ftl/bdev_ftl.o 00:05:34.829 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:34.829 CC module/bdev/iscsi/bdev_iscsi.o 00:05:34.829 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:35.088 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:35.088 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:35.088 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:35.088 LIB libspdk_bdev_aio.a 00:05:35.088 LIB libspdk_bdev_raid.a 00:05:35.088 SO libspdk_bdev_aio.so.6.0 00:05:35.088 SO libspdk_bdev_raid.so.6.0 00:05:35.088 SYMLINK libspdk_bdev_aio.so 00:05:35.088 LIB libspdk_bdev_ftl.a 00:05:35.346 SO libspdk_bdev_ftl.so.6.0 00:05:35.346 SYMLINK libspdk_bdev_raid.so 00:05:35.346 SYMLINK libspdk_bdev_ftl.so 00:05:35.346 LIB libspdk_bdev_iscsi.a 00:05:35.346 SO libspdk_bdev_iscsi.so.6.0 00:05:35.346 SYMLINK libspdk_bdev_iscsi.so 00:05:35.660 LIB libspdk_bdev_virtio.a 00:05:35.660 SO libspdk_bdev_virtio.so.6.0 00:05:35.660 SYMLINK libspdk_bdev_virtio.so 00:05:36.228 LIB libspdk_bdev_nvme.a 00:05:36.228 SO libspdk_bdev_nvme.so.7.1 00:05:36.487 SYMLINK libspdk_bdev_nvme.so 00:05:36.746 CC module/event/subsystems/sock/sock.o 00:05:36.746 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:36.746 CC module/event/subsystems/fsdev/fsdev.o 00:05:36.746 CC module/event/subsystems/vmd/vmd.o 00:05:36.746 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:36.746 CC module/event/subsystems/scheduler/scheduler.o 00:05:36.746 CC module/event/subsystems/keyring/keyring.o 00:05:36.746 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:36.746 CC module/event/subsystems/iobuf/iobuf.o 00:05:37.005 LIB libspdk_event_fsdev.a 00:05:37.005 LIB libspdk_event_keyring.a 00:05:37.005 LIB libspdk_event_sock.a 00:05:37.005 SO libspdk_event_keyring.so.1.0 00:05:37.005 SO libspdk_event_fsdev.so.1.0 00:05:37.005 LIB libspdk_event_vhost_blk.a 00:05:37.005 SO libspdk_event_sock.so.5.0 00:05:37.005 LIB libspdk_event_vmd.a 00:05:37.005 LIB libspdk_event_scheduler.a 00:05:37.005 SO libspdk_event_vhost_blk.so.3.0 00:05:37.005 SO libspdk_event_scheduler.so.4.0 00:05:37.005 LIB libspdk_event_iobuf.a 00:05:37.005 SO libspdk_event_vmd.so.6.0 00:05:37.005 SYMLINK libspdk_event_fsdev.so 00:05:37.005 SYMLINK libspdk_event_keyring.so 00:05:37.005 SYMLINK libspdk_event_sock.so 00:05:37.005 SYMLINK libspdk_event_vhost_blk.so 00:05:37.005 SO libspdk_event_iobuf.so.3.0 00:05:37.005 SYMLINK libspdk_event_scheduler.so 00:05:37.005 SYMLINK libspdk_event_vmd.so 00:05:37.264 SYMLINK libspdk_event_iobuf.so 00:05:37.524 CC module/event/subsystems/accel/accel.o 00:05:37.524 
LIB libspdk_event_accel.a 00:05:37.818 SO libspdk_event_accel.so.6.0 00:05:37.818 SYMLINK libspdk_event_accel.so 00:05:38.076 CC module/event/subsystems/bdev/bdev.o 00:05:38.334 LIB libspdk_event_bdev.a 00:05:38.334 SO libspdk_event_bdev.so.6.0 00:05:38.334 SYMLINK libspdk_event_bdev.so 00:05:38.593 CC module/event/subsystems/ublk/ublk.o 00:05:38.593 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:38.593 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:38.593 CC module/event/subsystems/scsi/scsi.o 00:05:38.593 CC module/event/subsystems/nbd/nbd.o 00:05:38.851 LIB libspdk_event_ublk.a 00:05:38.851 LIB libspdk_event_nbd.a 00:05:38.851 LIB libspdk_event_scsi.a 00:05:38.851 SO libspdk_event_nbd.so.6.0 00:05:38.851 SO libspdk_event_ublk.so.3.0 00:05:38.851 SO libspdk_event_scsi.so.6.0 00:05:38.851 SYMLINK libspdk_event_nbd.so 00:05:38.851 SYMLINK libspdk_event_ublk.so 00:05:38.851 SYMLINK libspdk_event_scsi.so 00:05:38.851 LIB libspdk_event_nvmf.a 00:05:38.851 SO libspdk_event_nvmf.so.6.0 00:05:39.110 SYMLINK libspdk_event_nvmf.so 00:05:39.110 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:39.110 CC module/event/subsystems/iscsi/iscsi.o 00:05:39.368 LIB libspdk_event_vhost_scsi.a 00:05:39.368 LIB libspdk_event_iscsi.a 00:05:39.368 SO libspdk_event_vhost_scsi.so.3.0 00:05:39.368 SO libspdk_event_iscsi.so.6.0 00:05:39.368 SYMLINK libspdk_event_vhost_scsi.so 00:05:39.368 SYMLINK libspdk_event_iscsi.so 00:05:39.626 SO libspdk.so.6.0 00:05:39.626 SYMLINK libspdk.so 00:05:39.885 CXX app/trace/trace.o 00:05:39.885 CC app/trace_record/trace_record.o 00:05:39.885 CC app/nvmf_tgt/nvmf_main.o 00:05:39.885 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:39.885 CC app/iscsi_tgt/iscsi_tgt.o 00:05:39.885 CC app/spdk_tgt/spdk_tgt.o 00:05:40.143 CC test/thread/poller_perf/poller_perf.o 00:05:40.143 CC examples/ioat/perf/perf.o 00:05:40.143 CC examples/util/zipf/zipf.o 00:05:40.143 CC test/dma/test_dma/test_dma.o 00:05:40.143 LINK nvmf_tgt 00:05:40.143 LINK spdk_trace_record 00:05:40.143 LINK interrupt_tgt 00:05:40.143 LINK poller_perf 00:05:40.143 LINK iscsi_tgt 00:05:40.401 LINK zipf 00:05:40.401 LINK spdk_tgt 00:05:40.401 LINK ioat_perf 00:05:40.401 LINK spdk_trace 00:05:40.659 CC app/spdk_lspci/spdk_lspci.o 00:05:40.659 CC app/spdk_nvme_perf/perf.o 00:05:40.659 CC examples/ioat/verify/verify.o 00:05:40.659 CC app/spdk_nvme_identify/identify.o 00:05:40.659 CC app/spdk_nvme_discover/discovery_aer.o 00:05:40.659 CC app/spdk_top/spdk_top.o 00:05:40.659 LINK test_dma 00:05:40.659 LINK spdk_lspci 00:05:40.659 CC app/spdk_dd/spdk_dd.o 00:05:40.659 CC examples/thread/thread/thread_ex.o 00:05:40.916 LINK verify 00:05:40.916 LINK spdk_nvme_discover 00:05:40.916 CC test/app/bdev_svc/bdev_svc.o 00:05:40.916 CC test/app/histogram_perf/histogram_perf.o 00:05:41.174 CC test/app/jsoncat/jsoncat.o 00:05:41.174 LINK thread 00:05:41.174 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:41.174 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:41.174 LINK bdev_svc 00:05:41.174 LINK histogram_perf 00:05:41.174 LINK jsoncat 00:05:41.174 LINK spdk_dd 00:05:41.432 LINK spdk_nvme_identify 00:05:41.432 LINK spdk_nvme_perf 00:05:41.432 LINK nvme_fuzz 00:05:41.432 LINK spdk_top 00:05:41.432 CC examples/sock/hello_world/hello_sock.o 00:05:41.689 CC examples/vmd/lsvmd/lsvmd.o 00:05:41.689 CC examples/idxd/perf/perf.o 00:05:41.689 TEST_HEADER include/spdk/accel.h 00:05:41.689 TEST_HEADER include/spdk/accel_module.h 00:05:41.689 TEST_HEADER include/spdk/assert.h 00:05:41.689 TEST_HEADER include/spdk/barrier.h 00:05:41.689 TEST_HEADER 
include/spdk/base64.h 00:05:41.689 TEST_HEADER include/spdk/bdev.h 00:05:41.689 TEST_HEADER include/spdk/bdev_module.h 00:05:41.689 TEST_HEADER include/spdk/bdev_zone.h 00:05:41.689 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:41.689 TEST_HEADER include/spdk/bit_array.h 00:05:41.689 TEST_HEADER include/spdk/bit_pool.h 00:05:41.689 TEST_HEADER include/spdk/blob_bdev.h 00:05:41.689 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:41.689 TEST_HEADER include/spdk/blobfs.h 00:05:41.689 TEST_HEADER include/spdk/blob.h 00:05:41.689 TEST_HEADER include/spdk/conf.h 00:05:41.689 TEST_HEADER include/spdk/config.h 00:05:41.689 TEST_HEADER include/spdk/cpuset.h 00:05:41.689 TEST_HEADER include/spdk/crc16.h 00:05:41.689 TEST_HEADER include/spdk/crc32.h 00:05:41.689 TEST_HEADER include/spdk/crc64.h 00:05:41.689 TEST_HEADER include/spdk/dif.h 00:05:41.689 TEST_HEADER include/spdk/dma.h 00:05:41.689 TEST_HEADER include/spdk/endian.h 00:05:41.689 TEST_HEADER include/spdk/env_dpdk.h 00:05:41.689 TEST_HEADER include/spdk/env.h 00:05:41.689 TEST_HEADER include/spdk/event.h 00:05:41.689 TEST_HEADER include/spdk/fd_group.h 00:05:41.689 TEST_HEADER include/spdk/fd.h 00:05:41.689 TEST_HEADER include/spdk/file.h 00:05:41.689 TEST_HEADER include/spdk/fsdev.h 00:05:41.689 TEST_HEADER include/spdk/fsdev_module.h 00:05:41.689 TEST_HEADER include/spdk/ftl.h 00:05:41.689 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:41.689 TEST_HEADER include/spdk/gpt_spec.h 00:05:41.689 TEST_HEADER include/spdk/hexlify.h 00:05:41.689 CC test/app/stub/stub.o 00:05:41.689 TEST_HEADER include/spdk/histogram_data.h 00:05:41.689 TEST_HEADER include/spdk/idxd.h 00:05:41.689 TEST_HEADER include/spdk/idxd_spec.h 00:05:41.689 TEST_HEADER include/spdk/init.h 00:05:41.689 TEST_HEADER include/spdk/ioat.h 00:05:41.689 TEST_HEADER include/spdk/ioat_spec.h 00:05:41.689 TEST_HEADER include/spdk/iscsi_spec.h 00:05:41.689 TEST_HEADER include/spdk/json.h 00:05:41.689 TEST_HEADER include/spdk/jsonrpc.h 00:05:41.689 LINK lsvmd 00:05:41.689 TEST_HEADER include/spdk/keyring.h 00:05:41.689 TEST_HEADER include/spdk/keyring_module.h 00:05:41.689 TEST_HEADER include/spdk/likely.h 00:05:41.689 TEST_HEADER include/spdk/log.h 00:05:41.689 TEST_HEADER include/spdk/lvol.h 00:05:41.689 TEST_HEADER include/spdk/md5.h 00:05:41.689 TEST_HEADER include/spdk/memory.h 00:05:41.689 TEST_HEADER include/spdk/mmio.h 00:05:41.689 TEST_HEADER include/spdk/nbd.h 00:05:41.689 TEST_HEADER include/spdk/net.h 00:05:41.689 TEST_HEADER include/spdk/notify.h 00:05:41.689 TEST_HEADER include/spdk/nvme.h 00:05:41.689 TEST_HEADER include/spdk/nvme_intel.h 00:05:41.689 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:41.689 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:41.689 TEST_HEADER include/spdk/nvme_spec.h 00:05:41.689 TEST_HEADER include/spdk/nvme_zns.h 00:05:41.689 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:41.689 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:41.689 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:41.689 TEST_HEADER include/spdk/nvmf.h 00:05:41.689 TEST_HEADER include/spdk/nvmf_spec.h 00:05:41.689 TEST_HEADER include/spdk/nvmf_transport.h 00:05:41.947 TEST_HEADER include/spdk/opal.h 00:05:41.947 TEST_HEADER include/spdk/opal_spec.h 00:05:41.947 TEST_HEADER include/spdk/pci_ids.h 00:05:41.947 TEST_HEADER include/spdk/pipe.h 00:05:41.947 TEST_HEADER include/spdk/queue.h 00:05:41.947 TEST_HEADER include/spdk/reduce.h 00:05:41.947 TEST_HEADER include/spdk/rpc.h 00:05:41.947 LINK hello_sock 00:05:41.947 TEST_HEADER include/spdk/scheduler.h 00:05:41.947 TEST_HEADER 
include/spdk/scsi.h 00:05:41.947 TEST_HEADER include/spdk/scsi_spec.h 00:05:41.947 TEST_HEADER include/spdk/sock.h 00:05:41.947 TEST_HEADER include/spdk/stdinc.h 00:05:41.947 TEST_HEADER include/spdk/string.h 00:05:41.947 TEST_HEADER include/spdk/thread.h 00:05:41.947 TEST_HEADER include/spdk/trace.h 00:05:41.947 TEST_HEADER include/spdk/trace_parser.h 00:05:41.947 TEST_HEADER include/spdk/tree.h 00:05:41.947 TEST_HEADER include/spdk/ublk.h 00:05:41.947 TEST_HEADER include/spdk/util.h 00:05:41.947 TEST_HEADER include/spdk/uuid.h 00:05:41.947 TEST_HEADER include/spdk/version.h 00:05:41.947 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:41.947 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:41.947 CC app/vhost/vhost.o 00:05:41.947 TEST_HEADER include/spdk/vhost.h 00:05:41.947 TEST_HEADER include/spdk/vmd.h 00:05:41.947 TEST_HEADER include/spdk/xor.h 00:05:41.947 TEST_HEADER include/spdk/zipf.h 00:05:41.947 CXX test/cpp_headers/accel.o 00:05:41.947 CC app/fio/nvme/fio_plugin.o 00:05:41.947 LINK stub 00:05:41.947 LINK hello_fsdev 00:05:41.947 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:41.947 LINK idxd_perf 00:05:41.947 CXX test/cpp_headers/accel_module.o 00:05:41.947 CC examples/vmd/led/led.o 00:05:42.206 LINK vhost 00:05:42.206 CXX test/cpp_headers/assert.o 00:05:42.206 LINK led 00:05:42.206 CC examples/accel/perf/accel_perf.o 00:05:42.464 CC examples/nvme/hello_world/hello_world.o 00:05:42.464 CXX test/cpp_headers/barrier.o 00:05:42.464 CC examples/blob/hello_world/hello_blob.o 00:05:42.464 CXX test/cpp_headers/base64.o 00:05:42.464 LINK vhost_fuzz 00:05:42.464 LINK spdk_nvme 00:05:42.464 CC test/env/mem_callbacks/mem_callbacks.o 00:05:42.464 CC app/fio/bdev/fio_plugin.o 00:05:42.464 CXX test/cpp_headers/bdev.o 00:05:42.722 LINK hello_world 00:05:42.722 LINK hello_blob 00:05:42.722 CC examples/nvme/reconnect/reconnect.o 00:05:42.722 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:42.722 CC examples/nvme/arbitration/arbitration.o 00:05:42.722 CXX test/cpp_headers/bdev_module.o 00:05:42.722 LINK accel_perf 00:05:42.722 CXX test/cpp_headers/bdev_zone.o 00:05:42.980 LINK iscsi_fuzz 00:05:42.980 CXX test/cpp_headers/bit_array.o 00:05:42.980 CXX test/cpp_headers/bit_pool.o 00:05:42.980 CC examples/blob/cli/blobcli.o 00:05:42.980 LINK spdk_bdev 00:05:42.980 LINK reconnect 00:05:42.980 LINK arbitration 00:05:43.238 LINK mem_callbacks 00:05:43.238 CXX test/cpp_headers/blob_bdev.o 00:05:43.238 CC test/env/vtophys/vtophys.o 00:05:43.238 LINK nvme_manage 00:05:43.238 CC examples/bdev/hello_world/hello_bdev.o 00:05:43.238 CXX test/cpp_headers/blobfs_bdev.o 00:05:43.238 CC examples/bdev/bdevperf/bdevperf.o 00:05:43.238 CXX test/cpp_headers/blobfs.o 00:05:43.238 CXX test/cpp_headers/blob.o 00:05:43.238 CC test/event/event_perf/event_perf.o 00:05:43.495 LINK vtophys 00:05:43.495 CXX test/cpp_headers/conf.o 00:05:43.495 LINK event_perf 00:05:43.495 CC examples/nvme/hotplug/hotplug.o 00:05:43.495 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:43.495 LINK blobcli 00:05:43.495 LINK hello_bdev 00:05:43.495 CC examples/nvme/abort/abort.o 00:05:43.495 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:43.752 CXX test/cpp_headers/config.o 00:05:43.752 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:43.752 CXX test/cpp_headers/cpuset.o 00:05:43.752 LINK cmb_copy 00:05:43.752 LINK pmr_persistence 00:05:43.752 CXX test/cpp_headers/crc16.o 00:05:43.752 LINK hotplug 00:05:43.752 CC test/event/reactor/reactor.o 00:05:44.011 CC test/env/memory/memory_ut.o 00:05:44.011 LINK env_dpdk_post_init 00:05:44.011 
LINK reactor 00:05:44.011 LINK abort 00:05:44.011 CXX test/cpp_headers/crc32.o 00:05:44.011 CC test/event/reactor_perf/reactor_perf.o 00:05:44.270 CC test/rpc_client/rpc_client_test.o 00:05:44.270 LINK bdevperf 00:05:44.270 CC test/nvme/aer/aer.o 00:05:44.270 LINK reactor_perf 00:05:44.270 CXX test/cpp_headers/crc64.o 00:05:44.270 CC test/nvme/reset/reset.o 00:05:44.270 CC test/nvme/sgl/sgl.o 00:05:44.270 CC test/env/pci/pci_ut.o 00:05:44.270 CC test/accel/dif/dif.o 00:05:44.270 LINK rpc_client_test 00:05:44.528 CXX test/cpp_headers/dif.o 00:05:44.528 CC test/event/app_repeat/app_repeat.o 00:05:44.528 CXX test/cpp_headers/dma.o 00:05:44.528 LINK aer 00:05:44.528 LINK reset 00:05:44.528 LINK sgl 00:05:44.528 CC examples/nvmf/nvmf/nvmf.o 00:05:44.787 LINK app_repeat 00:05:44.787 CXX test/cpp_headers/endian.o 00:05:44.787 CC test/nvme/e2edp/nvme_dp.o 00:05:44.787 LINK pci_ut 00:05:44.787 CXX test/cpp_headers/env_dpdk.o 00:05:44.787 CC test/nvme/overhead/overhead.o 00:05:44.787 CC test/nvme/err_injection/err_injection.o 00:05:45.045 LINK nvmf 00:05:45.045 LINK dif 00:05:45.045 CXX test/cpp_headers/env.o 00:05:45.045 CC test/event/scheduler/scheduler.o 00:05:45.045 LINK nvme_dp 00:05:45.045 LINK err_injection 00:05:45.045 LINK overhead 00:05:45.045 CC test/blobfs/mkfs/mkfs.o 00:05:45.303 LINK memory_ut 00:05:45.303 CXX test/cpp_headers/event.o 00:05:45.303 CXX test/cpp_headers/fd_group.o 00:05:45.303 LINK scheduler 00:05:45.303 CXX test/cpp_headers/fd.o 00:05:45.303 CC test/lvol/esnap/esnap.o 00:05:45.303 CC test/nvme/startup/startup.o 00:05:45.303 CXX test/cpp_headers/file.o 00:05:45.303 CXX test/cpp_headers/fsdev.o 00:05:45.303 LINK mkfs 00:05:45.303 CXX test/cpp_headers/fsdev_module.o 00:05:45.562 CXX test/cpp_headers/ftl.o 00:05:45.562 CC test/nvme/reserve/reserve.o 00:05:45.563 CXX test/cpp_headers/fuse_dispatcher.o 00:05:45.563 CXX test/cpp_headers/gpt_spec.o 00:05:45.563 LINK startup 00:05:45.563 CXX test/cpp_headers/hexlify.o 00:05:45.563 CC test/bdev/bdevio/bdevio.o 00:05:45.821 CC test/nvme/simple_copy/simple_copy.o 00:05:45.821 CXX test/cpp_headers/histogram_data.o 00:05:45.821 CC test/nvme/connect_stress/connect_stress.o 00:05:45.821 LINK reserve 00:05:45.821 CC test/nvme/boot_partition/boot_partition.o 00:05:45.821 CC test/nvme/fused_ordering/fused_ordering.o 00:05:45.821 CC test/nvme/compliance/nvme_compliance.o 00:05:45.821 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:45.821 CXX test/cpp_headers/idxd.o 00:05:46.079 LINK connect_stress 00:05:46.079 LINK boot_partition 00:05:46.079 LINK simple_copy 00:05:46.079 CC test/nvme/fdp/fdp.o 00:05:46.079 LINK doorbell_aers 00:05:46.079 LINK fused_ordering 00:05:46.079 LINK bdevio 00:05:46.079 CXX test/cpp_headers/idxd_spec.o 00:05:46.079 LINK nvme_compliance 00:05:46.079 CXX test/cpp_headers/init.o 00:05:46.079 CXX test/cpp_headers/ioat.o 00:05:46.337 CXX test/cpp_headers/ioat_spec.o 00:05:46.337 CXX test/cpp_headers/iscsi_spec.o 00:05:46.337 CC test/nvme/cuse/cuse.o 00:05:46.337 CXX test/cpp_headers/json.o 00:05:46.337 CXX test/cpp_headers/jsonrpc.o 00:05:46.337 CXX test/cpp_headers/keyring.o 00:05:46.337 CXX test/cpp_headers/keyring_module.o 00:05:46.337 LINK fdp 00:05:46.337 CXX test/cpp_headers/likely.o 00:05:46.337 CXX test/cpp_headers/log.o 00:05:46.337 CXX test/cpp_headers/lvol.o 00:05:46.594 CXX test/cpp_headers/md5.o 00:05:46.594 CXX test/cpp_headers/memory.o 00:05:46.594 CXX test/cpp_headers/mmio.o 00:05:46.594 CXX test/cpp_headers/nbd.o 00:05:46.594 CXX test/cpp_headers/net.o 00:05:46.594 CXX test/cpp_headers/notify.o 
00:05:46.594 CXX test/cpp_headers/nvme.o 00:05:46.594 CXX test/cpp_headers/nvme_intel.o 00:05:46.594 CXX test/cpp_headers/nvme_ocssd.o 00:05:46.594 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:46.594 CXX test/cpp_headers/nvme_spec.o 00:05:46.852 CXX test/cpp_headers/nvme_zns.o 00:05:46.852 CXX test/cpp_headers/nvmf_cmd.o 00:05:46.852 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:46.852 CXX test/cpp_headers/nvmf.o 00:05:46.852 CXX test/cpp_headers/nvmf_spec.o 00:05:46.852 CXX test/cpp_headers/nvmf_transport.o 00:05:46.852 CXX test/cpp_headers/opal.o 00:05:46.852 CXX test/cpp_headers/opal_spec.o 00:05:46.852 CXX test/cpp_headers/pci_ids.o 00:05:46.852 CXX test/cpp_headers/pipe.o 00:05:47.109 CXX test/cpp_headers/queue.o 00:05:47.109 CXX test/cpp_headers/reduce.o 00:05:47.109 CXX test/cpp_headers/rpc.o 00:05:47.109 CXX test/cpp_headers/scheduler.o 00:05:47.109 CXX test/cpp_headers/scsi.o 00:05:47.109 CXX test/cpp_headers/scsi_spec.o 00:05:47.109 CXX test/cpp_headers/sock.o 00:05:47.109 CXX test/cpp_headers/stdinc.o 00:05:47.109 CXX test/cpp_headers/string.o 00:05:47.109 CXX test/cpp_headers/thread.o 00:05:47.110 CXX test/cpp_headers/trace.o 00:05:47.110 CXX test/cpp_headers/trace_parser.o 00:05:47.367 CXX test/cpp_headers/tree.o 00:05:47.367 CXX test/cpp_headers/ublk.o 00:05:47.367 CXX test/cpp_headers/util.o 00:05:47.367 CXX test/cpp_headers/uuid.o 00:05:47.367 CXX test/cpp_headers/version.o 00:05:47.367 CXX test/cpp_headers/vfio_user_pci.o 00:05:47.367 CXX test/cpp_headers/vfio_user_spec.o 00:05:47.367 CXX test/cpp_headers/vhost.o 00:05:47.367 CXX test/cpp_headers/vmd.o 00:05:47.367 CXX test/cpp_headers/xor.o 00:05:47.367 CXX test/cpp_headers/zipf.o 00:05:47.625 LINK cuse 00:05:50.931 LINK esnap 00:05:50.931 00:05:50.931 real 1m43.636s 00:05:50.931 user 9m14.106s 00:05:50.931 sys 1m57.983s 00:05:50.931 03:55:32 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:50.931 ************************************ 00:05:50.931 END TEST make 00:05:50.931 ************************************ 00:05:50.931 03:55:32 make -- common/autotest_common.sh@10 -- $ set +x 00:05:51.191 03:55:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:51.191 03:55:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:51.191 03:55:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:51.191 03:55:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:51.191 03:55:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:51.191 03:55:32 -- pm/common@44 -- $ pid=5294 00:05:51.191 03:55:32 -- pm/common@50 -- $ kill -TERM 5294 00:05:51.191 03:55:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:51.191 03:55:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:51.191 03:55:32 -- pm/common@44 -- $ pid=5296 00:05:51.191 03:55:32 -- pm/common@50 -- $ kill -TERM 5296 00:05:51.191 03:55:32 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:51.191 03:55:32 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:51.191 03:55:33 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:51.191 03:55:33 -- common/autotest_common.sh@1711 -- # lcov --version 00:05:51.191 03:55:33 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:51.191 03:55:33 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:51.191 03:55:33 -- scripts/common.sh@373 -- # 
cmp_versions 1.15 '<' 2 00:05:51.191 03:55:33 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.191 03:55:33 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.191 03:55:33 -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.191 03:55:33 -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.191 03:55:33 -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.191 03:55:33 -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.191 03:55:33 -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.191 03:55:33 -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.191 03:55:33 -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.191 03:55:33 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.191 03:55:33 -- scripts/common.sh@344 -- # case "$op" in 00:05:51.191 03:55:33 -- scripts/common.sh@345 -- # : 1 00:05:51.191 03:55:33 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.191 03:55:33 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.191 03:55:33 -- scripts/common.sh@365 -- # decimal 1 00:05:51.191 03:55:33 -- scripts/common.sh@353 -- # local d=1 00:05:51.191 03:55:33 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.191 03:55:33 -- scripts/common.sh@355 -- # echo 1 00:05:51.191 03:55:33 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.191 03:55:33 -- scripts/common.sh@366 -- # decimal 2 00:05:51.191 03:55:33 -- scripts/common.sh@353 -- # local d=2 00:05:51.191 03:55:33 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.191 03:55:33 -- scripts/common.sh@355 -- # echo 2 00:05:51.191 03:55:33 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.191 03:55:33 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.191 03:55:33 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.191 03:55:33 -- scripts/common.sh@368 -- # return 0 00:05:51.191 03:55:33 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.191 03:55:33 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:51.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.191 --rc genhtml_branch_coverage=1 00:05:51.191 --rc genhtml_function_coverage=1 00:05:51.191 --rc genhtml_legend=1 00:05:51.191 --rc geninfo_all_blocks=1 00:05:51.191 --rc geninfo_unexecuted_blocks=1 00:05:51.191 00:05:51.191 ' 00:05:51.191 03:55:33 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:51.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.191 --rc genhtml_branch_coverage=1 00:05:51.191 --rc genhtml_function_coverage=1 00:05:51.191 --rc genhtml_legend=1 00:05:51.191 --rc geninfo_all_blocks=1 00:05:51.191 --rc geninfo_unexecuted_blocks=1 00:05:51.191 00:05:51.191 ' 00:05:51.191 03:55:33 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:51.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.191 --rc genhtml_branch_coverage=1 00:05:51.191 --rc genhtml_function_coverage=1 00:05:51.191 --rc genhtml_legend=1 00:05:51.191 --rc geninfo_all_blocks=1 00:05:51.191 --rc geninfo_unexecuted_blocks=1 00:05:51.191 00:05:51.191 ' 00:05:51.191 03:55:33 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:51.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.191 --rc genhtml_branch_coverage=1 00:05:51.191 --rc genhtml_function_coverage=1 00:05:51.191 --rc genhtml_legend=1 00:05:51.191 --rc geninfo_all_blocks=1 00:05:51.191 --rc geninfo_unexecuted_blocks=1 00:05:51.191 00:05:51.191 ' 00:05:51.191 03:55:33 -- spdk/autotest.sh@25 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:51.191 03:55:33 -- nvmf/common.sh@7 -- # uname -s 00:05:51.191 03:55:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.191 03:55:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.191 03:55:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.191 03:55:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.191 03:55:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.191 03:55:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.191 03:55:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.191 03:55:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.191 03:55:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.191 03:55:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.451 03:55:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:05:51.451 03:55:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:05:51.451 03:55:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.451 03:55:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.451 03:55:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:51.451 03:55:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.451 03:55:33 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:51.451 03:55:33 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:51.451 03:55:33 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.451 03:55:33 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.451 03:55:33 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.451 03:55:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.451 03:55:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.451 03:55:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.451 03:55:33 -- paths/export.sh@5 -- # export PATH 00:05:51.451 03:55:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.451 03:55:33 -- nvmf/common.sh@51 -- # : 0 00:05:51.451 03:55:33 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:51.451 03:55:33 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:51.451 03:55:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.451 03:55:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.451 03:55:33 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.451 03:55:33 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:51.451 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:51.451 03:55:33 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:51.451 03:55:33 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:51.452 03:55:33 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:51.452 03:55:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:51.452 03:55:33 -- spdk/autotest.sh@32 -- # uname -s 00:05:51.452 03:55:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:51.452 03:55:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:51.452 03:55:33 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:51.452 03:55:33 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:51.452 03:55:33 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:51.452 03:55:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:51.452 03:55:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:51.452 03:55:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:51.452 03:55:33 -- spdk/autotest.sh@48 -- # udevadm_pid=54556 00:05:51.452 03:55:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:51.452 03:55:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:51.452 03:55:33 -- pm/common@17 -- # local monitor 00:05:51.452 03:55:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:51.452 03:55:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:51.452 03:55:33 -- pm/common@21 -- # date +%s 00:05:51.452 03:55:33 -- pm/common@25 -- # sleep 1 00:05:51.452 03:55:33 -- pm/common@21 -- # date +%s 00:05:51.452 03:55:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733716533 00:05:51.452 03:55:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733716533 00:05:51.452 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733716533_collect-cpu-load.pm.log 00:05:51.452 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733716533_collect-vmstat.pm.log 00:05:52.389 03:55:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:52.389 03:55:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:52.389 03:55:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:52.389 03:55:34 -- common/autotest_common.sh@10 -- # set +x 00:05:52.389 03:55:34 -- spdk/autotest.sh@59 -- # create_test_list 00:05:52.389 03:55:34 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:52.389 03:55:34 -- common/autotest_common.sh@10 -- # set +x 00:05:52.389 03:55:34 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:52.389 03:55:34 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:52.389 03:55:34 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:52.389 03:55:34 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:52.389 03:55:34 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:52.389 03:55:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
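The xtrace above also walks through the lcov version gate in scripts/common.sh: both version strings are split on '.', '-' and ':' and compared component by component, which is why lcov 1.15 takes the "--rc lcov_branch_coverage" branch later in the log. A condensed, standalone sketch of that comparison (the helper name cmp_lt and the default-to-0 handling of missing components are illustrative assumptions, not the script's exact code):

    # Minimal re-implementation of the traced version comparison.
    cmp_lt() { # usage: cmp_lt 1.15 2  -> returns 0 (true) when $1 < $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v d1 d2 len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing components compare as 0
            (( d1 > d2 )) && return 1
            (( d1 < d2 )) && return 0
        done
        return 1                                # equal versions are not "less than"
    }
    # Example matching the log: cmp_lt 1.15 2 && echo "lcov older than 2.x"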
00:05:52.389 03:55:34 -- common/autotest_common.sh@1457 -- # uname 00:05:52.389 03:55:34 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:52.389 03:55:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:52.389 03:55:34 -- common/autotest_common.sh@1477 -- # uname 00:05:52.389 03:55:34 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:52.389 03:55:34 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:52.389 03:55:34 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:52.646 lcov: LCOV version 1.15 00:05:52.647 03:55:34 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:10.759 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:10.759 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:25.640 03:56:05 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:25.640 03:56:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:25.640 03:56:05 -- common/autotest_common.sh@10 -- # set +x 00:06:25.640 03:56:05 -- spdk/autotest.sh@78 -- # rm -f 00:06:25.640 03:56:05 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:25.640 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:25.640 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:25.640 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:25.640 03:56:05 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:25.640 03:56:05 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:25.640 03:56:05 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:25.640 03:56:05 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:25.640 03:56:05 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:25.640 03:56:05 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:25.640 03:56:05 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:25.640 03:56:05 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:25.640 03:56:05 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:25.640 03:56:05 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:25.640 03:56:05 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:25.640 03:56:05 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:25.640 03:56:05 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:25.640 03:56:05 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:25.640 03:56:05 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:25.640 03:56:05 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:25.640 03:56:05 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:25.640 03:56:05 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:25.640 03:56:05 -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme1n1/queue/zoned ]] 00:06:25.640 03:56:05 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:25.640 03:56:05 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:25.640 03:56:05 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:06:25.640 03:56:05 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:25.640 03:56:05 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:25.640 03:56:05 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:25.640 03:56:05 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:25.640 03:56:05 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:06:25.640 03:56:05 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:25.640 03:56:05 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:25.640 03:56:05 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:25.640 03:56:05 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:25.640 03:56:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:25.640 03:56:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:25.640 03:56:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:25.640 03:56:05 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:25.640 03:56:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:25.640 No valid GPT data, bailing 00:06:25.640 03:56:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:25.640 03:56:06 -- scripts/common.sh@394 -- # pt= 00:06:25.640 03:56:06 -- scripts/common.sh@395 -- # return 1 00:06:25.640 03:56:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:25.641 1+0 records in 00:06:25.641 1+0 records out 00:06:25.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00529113 s, 198 MB/s 00:06:25.641 03:56:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:25.641 03:56:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:25.641 03:56:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:25.641 03:56:06 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:25.641 03:56:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:25.641 No valid GPT data, bailing 00:06:25.641 03:56:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:25.641 03:56:06 -- scripts/common.sh@394 -- # pt= 00:06:25.641 03:56:06 -- scripts/common.sh@395 -- # return 1 00:06:25.641 03:56:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:25.641 1+0 records in 00:06:25.641 1+0 records out 00:06:25.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515889 s, 203 MB/s 00:06:25.641 03:56:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:25.641 03:56:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:25.641 03:56:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:25.641 03:56:06 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:25.641 03:56:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:25.641 No valid GPT data, bailing 00:06:25.641 03:56:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:25.641 03:56:06 -- scripts/common.sh@394 -- # pt= 00:06:25.641 03:56:06 -- scripts/common.sh@395 -- # return 1 00:06:25.641 03:56:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero 
of=/dev/nvme1n2 bs=1M count=1 00:06:25.641 1+0 records in 00:06:25.641 1+0 records out 00:06:25.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00512887 s, 204 MB/s 00:06:25.641 03:56:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:25.641 03:56:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:25.641 03:56:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:25.641 03:56:06 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:25.641 03:56:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:25.641 No valid GPT data, bailing 00:06:25.641 03:56:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:25.641 03:56:06 -- scripts/common.sh@394 -- # pt= 00:06:25.641 03:56:06 -- scripts/common.sh@395 -- # return 1 00:06:25.641 03:56:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:25.641 1+0 records in 00:06:25.641 1+0 records out 00:06:25.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00468654 s, 224 MB/s 00:06:25.641 03:56:06 -- spdk/autotest.sh@105 -- # sync 00:06:25.641 03:56:06 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:25.641 03:56:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:25.641 03:56:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:26.572 03:56:08 -- spdk/autotest.sh@111 -- # uname -s 00:06:26.572 03:56:08 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:26.572 03:56:08 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:26.572 03:56:08 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:27.505 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:27.505 Hugepages 00:06:27.505 node hugesize free / total 00:06:27.505 node0 1048576kB 0 / 0 00:06:27.505 node0 2048kB 0 / 0 00:06:27.505 00:06:27.505 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:27.505 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:27.505 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:27.505 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:27.505 03:56:09 -- spdk/autotest.sh@117 -- # uname -s 00:06:27.505 03:56:09 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:27.505 03:56:09 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:27.505 03:56:09 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:28.071 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:28.329 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:28.329 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:28.329 03:56:10 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:29.263 03:56:11 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:29.263 03:56:11 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:29.264 03:56:11 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:29.264 03:56:11 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:29.264 03:56:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:29.264 03:56:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:29.264 03:56:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:29.521 03:56:11 -- common/autotest_common.sh@1499 -- # 
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:29.521 03:56:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:29.521 03:56:11 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:29.521 03:56:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:29.521 03:56:11 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:29.779 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:29.779 Waiting for block devices as requested 00:06:29.779 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:30.038 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:30.038 03:56:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:30.038 03:56:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:30.038 03:56:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:30.038 03:56:11 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:30.038 03:56:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:30.038 03:56:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:30.038 03:56:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:30.038 03:56:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:30.038 03:56:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:30.038 03:56:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:30.038 03:56:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:30.038 03:56:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:30.038 03:56:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:30.038 03:56:11 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:30.038 03:56:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:30.038 03:56:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:30.038 03:56:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:30.038 03:56:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:30.038 03:56:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:30.038 03:56:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:30.038 03:56:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:30.038 03:56:11 -- common/autotest_common.sh@1543 -- # continue 00:06:30.038 03:56:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:30.038 03:56:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:30.038 03:56:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:30.038 03:56:11 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:30.039 03:56:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:30.039 03:56:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:30.039 03:56:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:30.039 03:56:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:30.039 03:56:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 
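The trace above resolves which /dev/nvmeX controller sits behind each PCI address by following the /sys/class/nvme symlinks and grepping for the bdf, as in get_nvme_ctrlr_from_bdf. A standalone sketch of that lookup (function name and error handling are illustrative; it assumes exactly one controller matches the given address):

    # Map a PCI address such as 0000:00:10.0 to its NVMe character device.
    nvme_ctrlr_for_bdf() {
        local bdf=$1 path
        path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
        printf '/dev/%s\n' "$(basename "$path")"
    }
    # Example from the log above: nvme_ctrlr_for_bdf 0000:00:10.0 prints /dev/nvme1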
00:06:30.039 03:56:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:30.039 03:56:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:30.039 03:56:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:30.039 03:56:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:30.039 03:56:11 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:30.039 03:56:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:30.039 03:56:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:30.039 03:56:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:30.039 03:56:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:30.039 03:56:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:30.039 03:56:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:30.039 03:56:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:30.039 03:56:11 -- common/autotest_common.sh@1543 -- # continue 00:06:30.039 03:56:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:30.039 03:56:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:30.039 03:56:11 -- common/autotest_common.sh@10 -- # set +x 00:06:30.039 03:56:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:30.039 03:56:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:30.039 03:56:11 -- common/autotest_common.sh@10 -- # set +x 00:06:30.298 03:56:11 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:30.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:30.903 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:30.903 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:31.162 03:56:12 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:31.162 03:56:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:31.162 03:56:12 -- common/autotest_common.sh@10 -- # set +x 00:06:31.162 03:56:12 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:31.162 03:56:12 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:31.162 03:56:12 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:31.162 03:56:12 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:31.162 03:56:12 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:31.162 03:56:12 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:31.162 03:56:12 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:31.162 03:56:12 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:31.162 03:56:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:31.162 03:56:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:31.162 03:56:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:31.162 03:56:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:31.162 03:56:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:31.162 03:56:13 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:31.162 03:56:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:31.162 03:56:13 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:31.162 03:56:13 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:31.162 03:56:13 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:31.162 03:56:13 -- 
common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:31.162 03:56:13 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:31.162 03:56:13 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:31.162 03:56:13 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:31.162 03:56:13 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:31.162 03:56:13 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:31.162 03:56:13 -- common/autotest_common.sh@1572 -- # return 0 00:06:31.162 03:56:13 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:31.162 03:56:13 -- common/autotest_common.sh@1580 -- # return 0 00:06:31.162 03:56:13 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:31.162 03:56:13 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:31.162 03:56:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:31.162 03:56:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:31.162 03:56:13 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:31.162 03:56:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:31.162 03:56:13 -- common/autotest_common.sh@10 -- # set +x 00:06:31.162 03:56:13 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:31.162 03:56:13 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:31.162 03:56:13 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:31.162 03:56:13 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:31.162 03:56:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.162 03:56:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.162 03:56:13 -- common/autotest_common.sh@10 -- # set +x 00:06:31.162 ************************************ 00:06:31.162 START TEST env 00:06:31.162 ************************************ 00:06:31.162 03:56:13 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:31.421 * Looking for test storage... 00:06:31.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:31.421 03:56:13 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.421 03:56:13 env -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.421 03:56:13 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.421 03:56:13 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.421 03:56:13 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.421 03:56:13 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.421 03:56:13 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.421 03:56:13 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.421 03:56:13 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.421 03:56:13 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.421 03:56:13 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.421 03:56:13 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.421 03:56:13 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.421 03:56:13 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.421 03:56:13 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.421 03:56:13 env -- scripts/common.sh@344 -- # case "$op" in 00:06:31.421 03:56:13 env -- scripts/common.sh@345 -- # : 1 00:06:31.421 03:56:13 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.421 03:56:13 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.421 03:56:13 env -- scripts/common.sh@365 -- # decimal 1 00:06:31.421 03:56:13 env -- scripts/common.sh@353 -- # local d=1 00:06:31.421 03:56:13 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.421 03:56:13 env -- scripts/common.sh@355 -- # echo 1 00:06:31.421 03:56:13 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.421 03:56:13 env -- scripts/common.sh@366 -- # decimal 2 00:06:31.421 03:56:13 env -- scripts/common.sh@353 -- # local d=2 00:06:31.421 03:56:13 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.421 03:56:13 env -- scripts/common.sh@355 -- # echo 2 00:06:31.421 03:56:13 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.421 03:56:13 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.421 03:56:13 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.421 03:56:13 env -- scripts/common.sh@368 -- # return 0 00:06:31.421 03:56:13 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.421 03:56:13 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.421 --rc genhtml_branch_coverage=1 00:06:31.421 --rc genhtml_function_coverage=1 00:06:31.421 --rc genhtml_legend=1 00:06:31.421 --rc geninfo_all_blocks=1 00:06:31.421 --rc geninfo_unexecuted_blocks=1 00:06:31.421 00:06:31.421 ' 00:06:31.421 03:56:13 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.421 --rc genhtml_branch_coverage=1 00:06:31.421 --rc genhtml_function_coverage=1 00:06:31.421 --rc genhtml_legend=1 00:06:31.421 --rc geninfo_all_blocks=1 00:06:31.421 --rc geninfo_unexecuted_blocks=1 00:06:31.421 00:06:31.421 ' 00:06:31.421 03:56:13 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.421 --rc genhtml_branch_coverage=1 00:06:31.421 --rc genhtml_function_coverage=1 00:06:31.421 --rc genhtml_legend=1 00:06:31.421 --rc geninfo_all_blocks=1 00:06:31.421 --rc geninfo_unexecuted_blocks=1 00:06:31.421 00:06:31.421 ' 00:06:31.421 03:56:13 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.421 --rc genhtml_branch_coverage=1 00:06:31.421 --rc genhtml_function_coverage=1 00:06:31.421 --rc genhtml_legend=1 00:06:31.421 --rc geninfo_all_blocks=1 00:06:31.421 --rc geninfo_unexecuted_blocks=1 00:06:31.421 00:06:31.421 ' 00:06:31.421 03:56:13 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:31.421 03:56:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.421 03:56:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.421 03:56:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:31.421 ************************************ 00:06:31.421 START TEST env_memory 00:06:31.421 ************************************ 00:06:31.421 03:56:13 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:31.421 00:06:31.421 00:06:31.421 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.421 http://cunit.sourceforge.net/ 00:06:31.421 00:06:31.421 00:06:31.421 Suite: memory 00:06:31.421 Test: alloc and free memory map ...[2024-12-09 03:56:13.302233] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:31.421 passed 00:06:31.421 Test: mem map translation ...[2024-12-09 03:56:13.333789] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:31.421 [2024-12-09 03:56:13.334018] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:31.421 [2024-12-09 03:56:13.334383] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:31.421 [2024-12-09 03:56:13.334539] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:31.679 passed 00:06:31.679 Test: mem map registration ...[2024-12-09 03:56:13.399014] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:31.679 [2024-12-09 03:56:13.399326] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:31.679 passed 00:06:31.679 Test: mem map adjacent registrations ...passed 00:06:31.679 00:06:31.679 Run Summary: Type Total Ran Passed Failed Inactive 00:06:31.679 suites 1 1 n/a 0 0 00:06:31.679 tests 4 4 4 0 0 00:06:31.679 asserts 152 152 152 0 n/a 00:06:31.679 00:06:31.679 Elapsed time = 0.216 seconds 00:06:31.679 00:06:31.679 real 0m0.236s 00:06:31.679 user 0m0.210s 00:06:31.679 sys 0m0.019s 00:06:31.679 03:56:13 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.679 03:56:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:31.679 ************************************ 00:06:31.679 END TEST env_memory 00:06:31.679 ************************************ 00:06:31.679 03:56:13 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:31.679 03:56:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.679 03:56:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.679 03:56:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:31.680 ************************************ 00:06:31.680 START TEST env_vtophys 00:06:31.680 ************************************ 00:06:31.680 03:56:13 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:31.680 EAL: lib.eal log level changed from notice to debug 00:06:31.680 EAL: Detected lcore 0 as core 0 on socket 0 00:06:31.680 EAL: Detected lcore 1 as core 0 on socket 0 00:06:31.680 EAL: Detected lcore 2 as core 0 on socket 0 00:06:31.680 EAL: Detected lcore 3 as core 0 on socket 0 00:06:31.680 EAL: Detected lcore 4 as core 0 on socket 0 00:06:31.680 EAL: Detected lcore 5 as core 0 on socket 0 00:06:31.680 EAL: Detected lcore 6 as core 0 on socket 0 00:06:31.680 EAL: Detected lcore 7 as core 0 on socket 0 00:06:31.680 EAL: Detected lcore 8 as core 0 on socket 0 00:06:31.680 EAL: Detected lcore 9 as core 0 on socket 0 00:06:31.680 EAL: Maximum logical cores by configuration: 128 00:06:31.680 EAL: Detected CPU lcores: 10 00:06:31.680 EAL: Detected NUMA nodes: 1 00:06:31.680 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:31.680 EAL: Detected shared linkage of DPDK 00:06:31.680 EAL: No 
shared files mode enabled, IPC will be disabled 00:06:31.680 EAL: Selected IOVA mode 'PA' 00:06:31.680 EAL: Probing VFIO support... 00:06:31.680 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:31.680 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:31.680 EAL: Ask a virtual area of 0x2e000 bytes 00:06:31.680 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:31.680 EAL: Setting up physically contiguous memory... 00:06:31.680 EAL: Setting maximum number of open files to 524288 00:06:31.680 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:31.680 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:31.680 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.680 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:31.680 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:31.680 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.680 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:31.680 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:31.680 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.680 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:31.680 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:31.680 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.680 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:31.680 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:31.680 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.680 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:31.680 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:31.680 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.680 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:31.680 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:31.680 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.680 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:31.680 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:31.680 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.680 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:31.680 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:31.680 EAL: Hugepages will be freed exactly as allocated. 00:06:31.680 EAL: No shared files mode enabled, IPC is disabled 00:06:31.680 EAL: No shared files mode enabled, IPC is disabled 00:06:31.938 EAL: TSC frequency is ~2200000 KHz 00:06:31.938 EAL: Main lcore 0 is ready (tid=7f85e7c67a00;cpuset=[0]) 00:06:31.938 EAL: Trying to obtain current memory policy. 00:06:31.938 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.938 EAL: Restoring previous memory policy: 0 00:06:31.938 EAL: request: mp_malloc_sync 00:06:31.938 EAL: No shared files mode enabled, IPC is disabled 00:06:31.938 EAL: Heap on socket 0 was expanded by 2MB 00:06:31.938 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:31.938 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:31.938 EAL: Mem event callback 'spdk:(nil)' registered 00:06:31.938 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:06:31.938 00:06:31.938 00:06:31.938 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.938 http://cunit.sourceforge.net/ 00:06:31.938 00:06:31.938 00:06:31.938 Suite: components_suite 00:06:31.938 Test: vtophys_malloc_test ...passed 00:06:31.938 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:31.938 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.938 EAL: Restoring previous memory policy: 4 00:06:31.938 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.938 EAL: request: mp_malloc_sync 00:06:31.938 EAL: No shared files mode enabled, IPC is disabled 00:06:31.938 EAL: Heap on socket 0 was expanded by 4MB 00:06:31.938 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.938 EAL: request: mp_malloc_sync 00:06:31.938 EAL: No shared files mode enabled, IPC is disabled 00:06:31.938 EAL: Heap on socket 0 was shrunk by 4MB 00:06:31.938 EAL: Trying to obtain current memory policy. 00:06:31.938 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.938 EAL: Restoring previous memory policy: 4 00:06:31.938 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.938 EAL: request: mp_malloc_sync 00:06:31.938 EAL: No shared files mode enabled, IPC is disabled 00:06:31.938 EAL: Heap on socket 0 was expanded by 6MB 00:06:31.938 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.938 EAL: request: mp_malloc_sync 00:06:31.938 EAL: No shared files mode enabled, IPC is disabled 00:06:31.938 EAL: Heap on socket 0 was shrunk by 6MB 00:06:31.938 EAL: Trying to obtain current memory policy. 00:06:31.938 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.938 EAL: Restoring previous memory policy: 4 00:06:31.938 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.938 EAL: request: mp_malloc_sync 00:06:31.938 EAL: No shared files mode enabled, IPC is disabled 00:06:31.938 EAL: Heap on socket 0 was expanded by 10MB 00:06:31.938 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.938 EAL: request: mp_malloc_sync 00:06:31.938 EAL: No shared files mode enabled, IPC is disabled 00:06:31.938 EAL: Heap on socket 0 was shrunk by 10MB 00:06:31.938 EAL: Trying to obtain current memory policy. 00:06:31.938 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.938 EAL: Restoring previous memory policy: 4 00:06:31.938 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.938 EAL: request: mp_malloc_sync 00:06:31.938 EAL: No shared files mode enabled, IPC is disabled 00:06:31.938 EAL: Heap on socket 0 was expanded by 18MB 00:06:31.938 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.938 EAL: request: mp_malloc_sync 00:06:31.938 EAL: No shared files mode enabled, IPC is disabled 00:06:31.938 EAL: Heap on socket 0 was shrunk by 18MB 00:06:31.938 EAL: Trying to obtain current memory policy. 00:06:31.938 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.938 EAL: Restoring previous memory policy: 4 00:06:31.938 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.938 EAL: request: mp_malloc_sync 00:06:31.938 EAL: No shared files mode enabled, IPC is disabled 00:06:31.938 EAL: Heap on socket 0 was expanded by 34MB 00:06:31.938 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.938 EAL: request: mp_malloc_sync 00:06:31.938 EAL: No shared files mode enabled, IPC is disabled 00:06:31.938 EAL: Heap on socket 0 was shrunk by 34MB 00:06:31.938 EAL: Trying to obtain current memory policy. 
00:06:31.938 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.938 EAL: Restoring previous memory policy: 4 00:06:31.938 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.938 EAL: request: mp_malloc_sync 00:06:31.938 EAL: No shared files mode enabled, IPC is disabled 00:06:31.938 EAL: Heap on socket 0 was expanded by 66MB 00:06:31.938 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.938 EAL: request: mp_malloc_sync 00:06:31.938 EAL: No shared files mode enabled, IPC is disabled 00:06:31.938 EAL: Heap on socket 0 was shrunk by 66MB 00:06:31.938 EAL: Trying to obtain current memory policy. 00:06:31.938 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.938 EAL: Restoring previous memory policy: 4 00:06:31.938 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.938 EAL: request: mp_malloc_sync 00:06:31.938 EAL: No shared files mode enabled, IPC is disabled 00:06:31.938 EAL: Heap on socket 0 was expanded by 130MB 00:06:31.938 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.196 EAL: request: mp_malloc_sync 00:06:32.196 EAL: No shared files mode enabled, IPC is disabled 00:06:32.196 EAL: Heap on socket 0 was shrunk by 130MB 00:06:32.196 EAL: Trying to obtain current memory policy. 00:06:32.196 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:32.196 EAL: Restoring previous memory policy: 4 00:06:32.196 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.196 EAL: request: mp_malloc_sync 00:06:32.196 EAL: No shared files mode enabled, IPC is disabled 00:06:32.196 EAL: Heap on socket 0 was expanded by 258MB 00:06:32.196 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.454 EAL: request: mp_malloc_sync 00:06:32.454 EAL: No shared files mode enabled, IPC is disabled 00:06:32.454 EAL: Heap on socket 0 was shrunk by 258MB 00:06:32.454 EAL: Trying to obtain current memory policy. 00:06:32.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:32.454 EAL: Restoring previous memory policy: 4 00:06:32.454 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.454 EAL: request: mp_malloc_sync 00:06:32.454 EAL: No shared files mode enabled, IPC is disabled 00:06:32.454 EAL: Heap on socket 0 was expanded by 514MB 00:06:32.713 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.971 EAL: request: mp_malloc_sync 00:06:32.972 EAL: No shared files mode enabled, IPC is disabled 00:06:32.972 EAL: Heap on socket 0 was shrunk by 514MB 00:06:32.972 EAL: Trying to obtain current memory policy. 
00:06:32.972 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.230 EAL: Restoring previous memory policy: 4 00:06:33.230 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.230 EAL: request: mp_malloc_sync 00:06:33.230 EAL: No shared files mode enabled, IPC is disabled 00:06:33.230 EAL: Heap on socket 0 was expanded by 1026MB 00:06:33.488 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.746 passed 00:06:33.746 00:06:33.746 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.746 suites 1 1 n/a 0 0 00:06:33.746 tests 2 2 2 0 0 00:06:33.746 asserts 5421 5421 5421 0 n/a 00:06:33.746 00:06:33.746 Elapsed time = 1.882 seconds 00:06:33.746 EAL: request: mp_malloc_sync 00:06:33.746 EAL: No shared files mode enabled, IPC is disabled 00:06:33.746 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:33.746 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.746 EAL: request: mp_malloc_sync 00:06:33.746 EAL: No shared files mode enabled, IPC is disabled 00:06:33.746 EAL: Heap on socket 0 was shrunk by 2MB 00:06:33.746 EAL: No shared files mode enabled, IPC is disabled 00:06:33.746 EAL: No shared files mode enabled, IPC is disabled 00:06:33.746 EAL: No shared files mode enabled, IPC is disabled 00:06:33.746 00:06:33.746 real 0m2.094s 00:06:33.746 user 0m1.206s 00:06:33.746 sys 0m0.752s 00:06:33.746 03:56:15 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.746 03:56:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:33.746 ************************************ 00:06:33.746 END TEST env_vtophys 00:06:33.746 ************************************ 00:06:33.746 03:56:15 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:33.746 03:56:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.746 03:56:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.746 03:56:15 env -- common/autotest_common.sh@10 -- # set +x 00:06:34.003 ************************************ 00:06:34.003 START TEST env_pci 00:06:34.003 ************************************ 00:06:34.003 03:56:15 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:34.003 00:06:34.003 00:06:34.003 CUnit - A unit testing framework for C - Version 2.1-3 00:06:34.003 http://cunit.sourceforge.net/ 00:06:34.003 00:06:34.003 00:06:34.003 Suite: pci 00:06:34.003 Test: pci_hook ...[2024-12-09 03:56:15.714536] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56786 has claimed it 00:06:34.003 passed 00:06:34.003 00:06:34.003 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.003 suites 1 1 n/a 0 0 00:06:34.003 tests 1 1 1 0 0 00:06:34.003 asserts 25 25 25 0 n/a 00:06:34.003 00:06:34.003 Elapsed time = 0.002 seconds 00:06:34.003 EAL: Cannot find device (10000:00:01.0) 00:06:34.003 EAL: Failed to attach device on primary process 00:06:34.003 00:06:34.003 real 0m0.022s 00:06:34.003 user 0m0.014s 00:06:34.003 sys 0m0.008s 00:06:34.003 03:56:15 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.003 03:56:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:34.003 ************************************ 00:06:34.003 END TEST env_pci 00:06:34.003 ************************************ 00:06:34.003 03:56:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:34.003 03:56:15 env -- env/env.sh@15 -- # uname 00:06:34.003 03:56:15 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:34.004 03:56:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:34.004 03:56:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:34.004 03:56:15 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:34.004 03:56:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.004 03:56:15 env -- common/autotest_common.sh@10 -- # set +x 00:06:34.004 ************************************ 00:06:34.004 START TEST env_dpdk_post_init 00:06:34.004 ************************************ 00:06:34.004 03:56:15 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:34.004 EAL: Detected CPU lcores: 10 00:06:34.004 EAL: Detected NUMA nodes: 1 00:06:34.004 EAL: Detected shared linkage of DPDK 00:06:34.004 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:34.004 EAL: Selected IOVA mode 'PA' 00:06:34.004 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:34.004 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:34.004 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:34.262 Starting DPDK initialization... 00:06:34.262 Starting SPDK post initialization... 00:06:34.262 SPDK NVMe probe 00:06:34.262 Attaching to 0000:00:10.0 00:06:34.262 Attaching to 0000:00:11.0 00:06:34.262 Attached to 0000:00:10.0 00:06:34.262 Attached to 0000:00:11.0 00:06:34.262 Cleaning up... 00:06:34.262 00:06:34.262 real 0m0.183s 00:06:34.262 user 0m0.048s 00:06:34.262 sys 0m0.035s 00:06:34.262 03:56:15 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.262 03:56:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:34.262 ************************************ 00:06:34.262 END TEST env_dpdk_post_init 00:06:34.262 ************************************ 00:06:34.262 03:56:16 env -- env/env.sh@26 -- # uname 00:06:34.262 03:56:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:34.262 03:56:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:34.262 03:56:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.262 03:56:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.262 03:56:16 env -- common/autotest_common.sh@10 -- # set +x 00:06:34.262 ************************************ 00:06:34.262 START TEST env_mem_callbacks 00:06:34.262 ************************************ 00:06:34.262 03:56:16 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:34.262 EAL: Detected CPU lcores: 10 00:06:34.262 EAL: Detected NUMA nodes: 1 00:06:34.262 EAL: Detected shared linkage of DPDK 00:06:34.262 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:34.262 EAL: Selected IOVA mode 'PA' 00:06:34.262 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:34.262 00:06:34.262 00:06:34.262 CUnit - A unit testing framework for C - Version 2.1-3 00:06:34.262 http://cunit.sourceforge.net/ 00:06:34.262 00:06:34.262 00:06:34.262 Suite: memory 00:06:34.262 Test: test ... 
00:06:34.262 register 0x200000200000 2097152 00:06:34.262 malloc 3145728 00:06:34.262 register 0x200000400000 4194304 00:06:34.262 buf 0x200000500000 len 3145728 PASSED 00:06:34.262 malloc 64 00:06:34.262 buf 0x2000004fff40 len 64 PASSED 00:06:34.262 malloc 4194304 00:06:34.262 register 0x200000800000 6291456 00:06:34.262 buf 0x200000a00000 len 4194304 PASSED 00:06:34.262 free 0x200000500000 3145728 00:06:34.262 free 0x2000004fff40 64 00:06:34.262 unregister 0x200000400000 4194304 PASSED 00:06:34.262 free 0x200000a00000 4194304 00:06:34.262 unregister 0x200000800000 6291456 PASSED 00:06:34.262 malloc 8388608 00:06:34.262 register 0x200000400000 10485760 00:06:34.262 buf 0x200000600000 len 8388608 PASSED 00:06:34.262 free 0x200000600000 8388608 00:06:34.262 unregister 0x200000400000 10485760 PASSED 00:06:34.262 passed 00:06:34.262 00:06:34.262 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.262 suites 1 1 n/a 0 0 00:06:34.262 tests 1 1 1 0 0 00:06:34.262 asserts 15 15 15 0 n/a 00:06:34.262 00:06:34.262 Elapsed time = 0.011 seconds 00:06:34.262 00:06:34.262 real 0m0.150s 00:06:34.262 user 0m0.017s 00:06:34.262 sys 0m0.031s 00:06:34.262 03:56:16 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.263 03:56:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:34.263 ************************************ 00:06:34.263 END TEST env_mem_callbacks 00:06:34.263 ************************************ 00:06:34.520 00:06:34.520 real 0m3.169s 00:06:34.520 user 0m1.708s 00:06:34.520 sys 0m1.109s 00:06:34.520 03:56:16 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.520 03:56:16 env -- common/autotest_common.sh@10 -- # set +x 00:06:34.520 ************************************ 00:06:34.520 END TEST env 00:06:34.520 ************************************ 00:06:34.520 03:56:16 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:34.520 03:56:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.520 03:56:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.520 03:56:16 -- common/autotest_common.sh@10 -- # set +x 00:06:34.520 ************************************ 00:06:34.520 START TEST rpc 00:06:34.520 ************************************ 00:06:34.520 03:56:16 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:34.520 * Looking for test storage... 
00:06:34.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:34.520 03:56:16 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:34.520 03:56:16 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:34.520 03:56:16 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:34.520 03:56:16 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:34.520 03:56:16 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.520 03:56:16 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.520 03:56:16 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.520 03:56:16 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.520 03:56:16 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.520 03:56:16 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.520 03:56:16 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.520 03:56:16 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.520 03:56:16 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.520 03:56:16 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.520 03:56:16 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.520 03:56:16 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:34.520 03:56:16 rpc -- scripts/common.sh@345 -- # : 1 00:06:34.520 03:56:16 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.520 03:56:16 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.520 03:56:16 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:34.520 03:56:16 rpc -- scripts/common.sh@353 -- # local d=1 00:06:34.520 03:56:16 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.520 03:56:16 rpc -- scripts/common.sh@355 -- # echo 1 00:06:34.520 03:56:16 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.520 03:56:16 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:34.520 03:56:16 rpc -- scripts/common.sh@353 -- # local d=2 00:06:34.520 03:56:16 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.520 03:56:16 rpc -- scripts/common.sh@355 -- # echo 2 00:06:34.520 03:56:16 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.520 03:56:16 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.520 03:56:16 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.520 03:56:16 rpc -- scripts/common.sh@368 -- # return 0 00:06:34.520 03:56:16 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.520 03:56:16 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:34.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.520 --rc genhtml_branch_coverage=1 00:06:34.520 --rc genhtml_function_coverage=1 00:06:34.520 --rc genhtml_legend=1 00:06:34.520 --rc geninfo_all_blocks=1 00:06:34.520 --rc geninfo_unexecuted_blocks=1 00:06:34.520 00:06:34.520 ' 00:06:34.520 03:56:16 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:34.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.520 --rc genhtml_branch_coverage=1 00:06:34.520 --rc genhtml_function_coverage=1 00:06:34.520 --rc genhtml_legend=1 00:06:34.520 --rc geninfo_all_blocks=1 00:06:34.520 --rc geninfo_unexecuted_blocks=1 00:06:34.520 00:06:34.520 ' 00:06:34.520 03:56:16 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:34.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.520 --rc genhtml_branch_coverage=1 00:06:34.520 --rc genhtml_function_coverage=1 00:06:34.520 --rc 
genhtml_legend=1 00:06:34.520 --rc geninfo_all_blocks=1 00:06:34.520 --rc geninfo_unexecuted_blocks=1 00:06:34.520 00:06:34.520 ' 00:06:34.520 03:56:16 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:34.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.520 --rc genhtml_branch_coverage=1 00:06:34.520 --rc genhtml_function_coverage=1 00:06:34.520 --rc genhtml_legend=1 00:06:34.520 --rc geninfo_all_blocks=1 00:06:34.521 --rc geninfo_unexecuted_blocks=1 00:06:34.521 00:06:34.521 ' 00:06:34.521 03:56:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56904 00:06:34.521 03:56:16 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:34.521 03:56:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.521 03:56:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56904 00:06:34.521 03:56:16 rpc -- common/autotest_common.sh@835 -- # '[' -z 56904 ']' 00:06:34.521 03:56:16 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.521 03:56:16 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.521 03:56:16 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.521 03:56:16 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.521 03:56:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.779 [2024-12-09 03:56:16.538870] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:06:34.779 [2024-12-09 03:56:16.539013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56904 ] 00:06:34.779 [2024-12-09 03:56:16.685422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.036 [2024-12-09 03:56:16.763807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:35.036 [2024-12-09 03:56:16.763894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56904' to capture a snapshot of events at runtime. 00:06:35.036 [2024-12-09 03:56:16.763920] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.036 [2024-12-09 03:56:16.763929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:35.036 [2024-12-09 03:56:16.763935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56904 for offline analysis/debug. 
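(For reference, not part of the captured output: the spdk_tgt instance started above was launched with '-e bdev', so only the bdev tracepoint group is enabled — it shows up later in this run as mask 0x8 with tpoint_mask 0xffffffffffffffff in trace_get_info. A minimal sketch of capturing that trace by hand, using the exact PID and shm path this particular run printed — both change on every invocation:

    spdk_trace -s spdk_tgt -p 56904             # snapshot events from the live target, as the NOTICE above suggests
    cp /dev/shm/spdk_tgt_trace.pid56904 .       # or keep the shared-memory trace file for offline analysis/debug

Any other flags or post-processing of the copied file are outside what this log shows; consult spdk_trace --help in the matching SPDK build.)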
00:06:35.036 [2024-12-09 03:56:16.764490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.036 [2024-12-09 03:56:16.858297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.602 03:56:17 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.602 03:56:17 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:35.602 03:56:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:35.602 03:56:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:35.602 03:56:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:35.602 03:56:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:35.602 03:56:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.602 03:56:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.602 03:56:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.602 ************************************ 00:06:35.602 START TEST rpc_integrity 00:06:35.602 ************************************ 00:06:35.602 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:35.602 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:35.602 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.602 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.861 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.861 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:35.861 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:35.861 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:35.861 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:35.861 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.861 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.861 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.861 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:35.861 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:35.861 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.861 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.861 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.861 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:35.861 { 00:06:35.861 "name": "Malloc0", 00:06:35.861 "aliases": [ 00:06:35.861 "55bd63ff-510d-462e-aba8-70eb217ea941" 00:06:35.861 ], 00:06:35.861 "product_name": "Malloc disk", 00:06:35.861 "block_size": 512, 00:06:35.861 "num_blocks": 16384, 00:06:35.861 "uuid": "55bd63ff-510d-462e-aba8-70eb217ea941", 00:06:35.861 "assigned_rate_limits": { 00:06:35.861 "rw_ios_per_sec": 0, 00:06:35.861 "rw_mbytes_per_sec": 0, 00:06:35.861 "r_mbytes_per_sec": 0, 00:06:35.861 "w_mbytes_per_sec": 0 00:06:35.861 }, 00:06:35.861 "claimed": false, 00:06:35.861 "zoned": false, 00:06:35.861 
"supported_io_types": { 00:06:35.861 "read": true, 00:06:35.861 "write": true, 00:06:35.861 "unmap": true, 00:06:35.861 "flush": true, 00:06:35.861 "reset": true, 00:06:35.861 "nvme_admin": false, 00:06:35.861 "nvme_io": false, 00:06:35.861 "nvme_io_md": false, 00:06:35.861 "write_zeroes": true, 00:06:35.861 "zcopy": true, 00:06:35.861 "get_zone_info": false, 00:06:35.861 "zone_management": false, 00:06:35.861 "zone_append": false, 00:06:35.861 "compare": false, 00:06:35.861 "compare_and_write": false, 00:06:35.861 "abort": true, 00:06:35.861 "seek_hole": false, 00:06:35.861 "seek_data": false, 00:06:35.861 "copy": true, 00:06:35.861 "nvme_iov_md": false 00:06:35.861 }, 00:06:35.861 "memory_domains": [ 00:06:35.861 { 00:06:35.861 "dma_device_id": "system", 00:06:35.861 "dma_device_type": 1 00:06:35.861 }, 00:06:35.861 { 00:06:35.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.861 "dma_device_type": 2 00:06:35.861 } 00:06:35.861 ], 00:06:35.861 "driver_specific": {} 00:06:35.861 } 00:06:35.861 ]' 00:06:35.861 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:35.861 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:35.861 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:35.861 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.861 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.861 [2024-12-09 03:56:17.701021] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:35.861 [2024-12-09 03:56:17.701093] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:35.861 [2024-12-09 03:56:17.701117] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2021cb0 00:06:35.861 [2024-12-09 03:56:17.701127] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:35.861 [2024-12-09 03:56:17.703099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:35.861 [2024-12-09 03:56:17.703148] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:35.861 Passthru0 00:06:35.861 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.861 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:35.861 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.861 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.861 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.861 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:35.861 { 00:06:35.861 "name": "Malloc0", 00:06:35.861 "aliases": [ 00:06:35.861 "55bd63ff-510d-462e-aba8-70eb217ea941" 00:06:35.861 ], 00:06:35.861 "product_name": "Malloc disk", 00:06:35.861 "block_size": 512, 00:06:35.861 "num_blocks": 16384, 00:06:35.861 "uuid": "55bd63ff-510d-462e-aba8-70eb217ea941", 00:06:35.861 "assigned_rate_limits": { 00:06:35.861 "rw_ios_per_sec": 0, 00:06:35.861 "rw_mbytes_per_sec": 0, 00:06:35.861 "r_mbytes_per_sec": 0, 00:06:35.861 "w_mbytes_per_sec": 0 00:06:35.861 }, 00:06:35.861 "claimed": true, 00:06:35.861 "claim_type": "exclusive_write", 00:06:35.861 "zoned": false, 00:06:35.861 "supported_io_types": { 00:06:35.861 "read": true, 00:06:35.861 "write": true, 00:06:35.861 "unmap": true, 00:06:35.861 "flush": true, 00:06:35.861 "reset": true, 00:06:35.861 "nvme_admin": false, 
00:06:35.861 "nvme_io": false, 00:06:35.861 "nvme_io_md": false, 00:06:35.861 "write_zeroes": true, 00:06:35.861 "zcopy": true, 00:06:35.861 "get_zone_info": false, 00:06:35.861 "zone_management": false, 00:06:35.861 "zone_append": false, 00:06:35.861 "compare": false, 00:06:35.861 "compare_and_write": false, 00:06:35.861 "abort": true, 00:06:35.861 "seek_hole": false, 00:06:35.861 "seek_data": false, 00:06:35.861 "copy": true, 00:06:35.861 "nvme_iov_md": false 00:06:35.861 }, 00:06:35.861 "memory_domains": [ 00:06:35.861 { 00:06:35.861 "dma_device_id": "system", 00:06:35.861 "dma_device_type": 1 00:06:35.861 }, 00:06:35.861 { 00:06:35.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.861 "dma_device_type": 2 00:06:35.861 } 00:06:35.861 ], 00:06:35.861 "driver_specific": {} 00:06:35.861 }, 00:06:35.861 { 00:06:35.861 "name": "Passthru0", 00:06:35.861 "aliases": [ 00:06:35.861 "3bb4c04a-7f67-5e85-b2b5-58eca76fd4c3" 00:06:35.861 ], 00:06:35.861 "product_name": "passthru", 00:06:35.861 "block_size": 512, 00:06:35.861 "num_blocks": 16384, 00:06:35.861 "uuid": "3bb4c04a-7f67-5e85-b2b5-58eca76fd4c3", 00:06:35.861 "assigned_rate_limits": { 00:06:35.861 "rw_ios_per_sec": 0, 00:06:35.861 "rw_mbytes_per_sec": 0, 00:06:35.861 "r_mbytes_per_sec": 0, 00:06:35.861 "w_mbytes_per_sec": 0 00:06:35.861 }, 00:06:35.861 "claimed": false, 00:06:35.861 "zoned": false, 00:06:35.861 "supported_io_types": { 00:06:35.861 "read": true, 00:06:35.861 "write": true, 00:06:35.861 "unmap": true, 00:06:35.861 "flush": true, 00:06:35.861 "reset": true, 00:06:35.861 "nvme_admin": false, 00:06:35.862 "nvme_io": false, 00:06:35.862 "nvme_io_md": false, 00:06:35.862 "write_zeroes": true, 00:06:35.862 "zcopy": true, 00:06:35.862 "get_zone_info": false, 00:06:35.862 "zone_management": false, 00:06:35.862 "zone_append": false, 00:06:35.862 "compare": false, 00:06:35.862 "compare_and_write": false, 00:06:35.862 "abort": true, 00:06:35.862 "seek_hole": false, 00:06:35.862 "seek_data": false, 00:06:35.862 "copy": true, 00:06:35.862 "nvme_iov_md": false 00:06:35.862 }, 00:06:35.862 "memory_domains": [ 00:06:35.862 { 00:06:35.862 "dma_device_id": "system", 00:06:35.862 "dma_device_type": 1 00:06:35.862 }, 00:06:35.862 { 00:06:35.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.862 "dma_device_type": 2 00:06:35.862 } 00:06:35.862 ], 00:06:35.862 "driver_specific": { 00:06:35.862 "passthru": { 00:06:35.862 "name": "Passthru0", 00:06:35.862 "base_bdev_name": "Malloc0" 00:06:35.862 } 00:06:35.862 } 00:06:35.862 } 00:06:35.862 ]' 00:06:35.862 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:35.862 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:35.862 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:35.862 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.862 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.862 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.862 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:35.862 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.862 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.120 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.120 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:36.120 03:56:17 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.120 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.120 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.120 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:36.120 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:36.120 03:56:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:36.120 00:06:36.120 real 0m0.338s 00:06:36.120 user 0m0.222s 00:06:36.120 sys 0m0.048s 00:06:36.120 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.120 03:56:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.120 ************************************ 00:06:36.120 END TEST rpc_integrity 00:06:36.120 ************************************ 00:06:36.120 03:56:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:36.120 03:56:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.120 03:56:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.120 03:56:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.120 ************************************ 00:06:36.120 START TEST rpc_plugins 00:06:36.120 ************************************ 00:06:36.120 03:56:17 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:36.120 03:56:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:36.120 03:56:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.120 03:56:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.120 03:56:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.120 03:56:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:36.120 03:56:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:36.120 03:56:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.120 03:56:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.120 03:56:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.120 03:56:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:36.120 { 00:06:36.120 "name": "Malloc1", 00:06:36.120 "aliases": [ 00:06:36.120 "82c1b099-bb10-424a-bd80-f6eaa4431653" 00:06:36.120 ], 00:06:36.120 "product_name": "Malloc disk", 00:06:36.120 "block_size": 4096, 00:06:36.120 "num_blocks": 256, 00:06:36.120 "uuid": "82c1b099-bb10-424a-bd80-f6eaa4431653", 00:06:36.120 "assigned_rate_limits": { 00:06:36.120 "rw_ios_per_sec": 0, 00:06:36.120 "rw_mbytes_per_sec": 0, 00:06:36.120 "r_mbytes_per_sec": 0, 00:06:36.120 "w_mbytes_per_sec": 0 00:06:36.120 }, 00:06:36.120 "claimed": false, 00:06:36.120 "zoned": false, 00:06:36.120 "supported_io_types": { 00:06:36.120 "read": true, 00:06:36.120 "write": true, 00:06:36.120 "unmap": true, 00:06:36.120 "flush": true, 00:06:36.120 "reset": true, 00:06:36.120 "nvme_admin": false, 00:06:36.120 "nvme_io": false, 00:06:36.120 "nvme_io_md": false, 00:06:36.120 "write_zeroes": true, 00:06:36.120 "zcopy": true, 00:06:36.120 "get_zone_info": false, 00:06:36.120 "zone_management": false, 00:06:36.120 "zone_append": false, 00:06:36.120 "compare": false, 00:06:36.120 "compare_and_write": false, 00:06:36.120 "abort": true, 00:06:36.120 "seek_hole": false, 00:06:36.120 "seek_data": false, 00:06:36.120 "copy": true, 00:06:36.121 "nvme_iov_md": false 00:06:36.121 }, 00:06:36.121 "memory_domains": [ 00:06:36.121 { 
00:06:36.121 "dma_device_id": "system", 00:06:36.121 "dma_device_type": 1 00:06:36.121 }, 00:06:36.121 { 00:06:36.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.121 "dma_device_type": 2 00:06:36.121 } 00:06:36.121 ], 00:06:36.121 "driver_specific": {} 00:06:36.121 } 00:06:36.121 ]' 00:06:36.121 03:56:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:36.121 03:56:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:36.121 03:56:18 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:36.121 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.121 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.121 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.121 03:56:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:36.121 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.121 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.121 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.121 03:56:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:36.121 03:56:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:36.379 03:56:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:36.379 00:06:36.379 real 0m0.158s 00:06:36.379 user 0m0.093s 00:06:36.379 sys 0m0.025s 00:06:36.379 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.379 ************************************ 00:06:36.379 END TEST rpc_plugins 00:06:36.379 03:56:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.379 ************************************ 00:06:36.379 03:56:18 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:36.379 03:56:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.379 03:56:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.379 03:56:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.379 ************************************ 00:06:36.379 START TEST rpc_trace_cmd_test 00:06:36.379 ************************************ 00:06:36.379 03:56:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:36.379 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:36.379 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:36.379 03:56:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.379 03:56:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.379 03:56:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.379 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:36.379 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56904", 00:06:36.379 "tpoint_group_mask": "0x8", 00:06:36.379 "iscsi_conn": { 00:06:36.379 "mask": "0x2", 00:06:36.379 "tpoint_mask": "0x0" 00:06:36.379 }, 00:06:36.379 "scsi": { 00:06:36.379 "mask": "0x4", 00:06:36.379 "tpoint_mask": "0x0" 00:06:36.379 }, 00:06:36.379 "bdev": { 00:06:36.379 "mask": "0x8", 00:06:36.379 "tpoint_mask": "0xffffffffffffffff" 00:06:36.379 }, 00:06:36.379 "nvmf_rdma": { 00:06:36.379 "mask": "0x10", 00:06:36.379 "tpoint_mask": "0x0" 00:06:36.379 }, 00:06:36.379 "nvmf_tcp": { 00:06:36.379 "mask": "0x20", 00:06:36.379 "tpoint_mask": "0x0" 00:06:36.379 }, 00:06:36.379 "ftl": { 00:06:36.379 
"mask": "0x40", 00:06:36.379 "tpoint_mask": "0x0" 00:06:36.379 }, 00:06:36.379 "blobfs": { 00:06:36.379 "mask": "0x80", 00:06:36.379 "tpoint_mask": "0x0" 00:06:36.379 }, 00:06:36.379 "dsa": { 00:06:36.379 "mask": "0x200", 00:06:36.379 "tpoint_mask": "0x0" 00:06:36.379 }, 00:06:36.379 "thread": { 00:06:36.379 "mask": "0x400", 00:06:36.379 "tpoint_mask": "0x0" 00:06:36.379 }, 00:06:36.379 "nvme_pcie": { 00:06:36.379 "mask": "0x800", 00:06:36.379 "tpoint_mask": "0x0" 00:06:36.379 }, 00:06:36.379 "iaa": { 00:06:36.379 "mask": "0x1000", 00:06:36.379 "tpoint_mask": "0x0" 00:06:36.379 }, 00:06:36.379 "nvme_tcp": { 00:06:36.379 "mask": "0x2000", 00:06:36.379 "tpoint_mask": "0x0" 00:06:36.379 }, 00:06:36.379 "bdev_nvme": { 00:06:36.379 "mask": "0x4000", 00:06:36.379 "tpoint_mask": "0x0" 00:06:36.379 }, 00:06:36.379 "sock": { 00:06:36.379 "mask": "0x8000", 00:06:36.379 "tpoint_mask": "0x0" 00:06:36.379 }, 00:06:36.379 "blob": { 00:06:36.379 "mask": "0x10000", 00:06:36.379 "tpoint_mask": "0x0" 00:06:36.379 }, 00:06:36.379 "bdev_raid": { 00:06:36.379 "mask": "0x20000", 00:06:36.379 "tpoint_mask": "0x0" 00:06:36.379 }, 00:06:36.379 "scheduler": { 00:06:36.379 "mask": "0x40000", 00:06:36.379 "tpoint_mask": "0x0" 00:06:36.379 } 00:06:36.379 }' 00:06:36.379 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:36.379 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:36.379 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:36.379 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:36.379 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:36.379 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:36.379 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:36.688 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:36.688 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:36.688 03:56:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:36.688 00:06:36.688 real 0m0.285s 00:06:36.688 user 0m0.245s 00:06:36.688 sys 0m0.030s 00:06:36.688 03:56:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.688 ************************************ 00:06:36.688 03:56:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.688 END TEST rpc_trace_cmd_test 00:06:36.688 ************************************ 00:06:36.688 03:56:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:36.688 03:56:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:36.688 03:56:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:36.688 03:56:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.688 03:56:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.688 03:56:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.688 ************************************ 00:06:36.688 START TEST rpc_daemon_integrity 00:06:36.688 ************************************ 00:06:36.688 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:36.688 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:36.688 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.688 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.688 
03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.688 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:36.689 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:36.689 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:36.689 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:36.689 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.689 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.689 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.689 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:36.689 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:36.689 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.689 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.689 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.689 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:36.689 { 00:06:36.689 "name": "Malloc2", 00:06:36.689 "aliases": [ 00:06:36.689 "5053ceb3-ad88-4d39-8dd6-435b0043c857" 00:06:36.689 ], 00:06:36.689 "product_name": "Malloc disk", 00:06:36.689 "block_size": 512, 00:06:36.689 "num_blocks": 16384, 00:06:36.689 "uuid": "5053ceb3-ad88-4d39-8dd6-435b0043c857", 00:06:36.689 "assigned_rate_limits": { 00:06:36.689 "rw_ios_per_sec": 0, 00:06:36.689 "rw_mbytes_per_sec": 0, 00:06:36.689 "r_mbytes_per_sec": 0, 00:06:36.689 "w_mbytes_per_sec": 0 00:06:36.689 }, 00:06:36.689 "claimed": false, 00:06:36.689 "zoned": false, 00:06:36.689 "supported_io_types": { 00:06:36.689 "read": true, 00:06:36.689 "write": true, 00:06:36.689 "unmap": true, 00:06:36.689 "flush": true, 00:06:36.689 "reset": true, 00:06:36.689 "nvme_admin": false, 00:06:36.689 "nvme_io": false, 00:06:36.689 "nvme_io_md": false, 00:06:36.689 "write_zeroes": true, 00:06:36.689 "zcopy": true, 00:06:36.689 "get_zone_info": false, 00:06:36.689 "zone_management": false, 00:06:36.689 "zone_append": false, 00:06:36.689 "compare": false, 00:06:36.689 "compare_and_write": false, 00:06:36.689 "abort": true, 00:06:36.689 "seek_hole": false, 00:06:36.689 "seek_data": false, 00:06:36.689 "copy": true, 00:06:36.689 "nvme_iov_md": false 00:06:36.689 }, 00:06:36.689 "memory_domains": [ 00:06:36.689 { 00:06:36.689 "dma_device_id": "system", 00:06:36.689 "dma_device_type": 1 00:06:36.689 }, 00:06:36.689 { 00:06:36.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.689 "dma_device_type": 2 00:06:36.689 } 00:06:36.689 ], 00:06:36.689 "driver_specific": {} 00:06:36.689 } 00:06:36.689 ]' 00:06:36.689 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.955 [2024-12-09 03:56:18.643721] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:36.955 [2024-12-09 03:56:18.643813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:06:36.955 [2024-12-09 03:56:18.643835] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21b6430 00:06:36.955 [2024-12-09 03:56:18.643845] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:36.955 [2024-12-09 03:56:18.645920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:36.955 [2024-12-09 03:56:18.645968] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:36.955 Passthru0 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:36.955 { 00:06:36.955 "name": "Malloc2", 00:06:36.955 "aliases": [ 00:06:36.955 "5053ceb3-ad88-4d39-8dd6-435b0043c857" 00:06:36.955 ], 00:06:36.955 "product_name": "Malloc disk", 00:06:36.955 "block_size": 512, 00:06:36.955 "num_blocks": 16384, 00:06:36.955 "uuid": "5053ceb3-ad88-4d39-8dd6-435b0043c857", 00:06:36.955 "assigned_rate_limits": { 00:06:36.955 "rw_ios_per_sec": 0, 00:06:36.955 "rw_mbytes_per_sec": 0, 00:06:36.955 "r_mbytes_per_sec": 0, 00:06:36.955 "w_mbytes_per_sec": 0 00:06:36.955 }, 00:06:36.955 "claimed": true, 00:06:36.955 "claim_type": "exclusive_write", 00:06:36.955 "zoned": false, 00:06:36.955 "supported_io_types": { 00:06:36.955 "read": true, 00:06:36.955 "write": true, 00:06:36.955 "unmap": true, 00:06:36.955 "flush": true, 00:06:36.955 "reset": true, 00:06:36.955 "nvme_admin": false, 00:06:36.955 "nvme_io": false, 00:06:36.955 "nvme_io_md": false, 00:06:36.955 "write_zeroes": true, 00:06:36.955 "zcopy": true, 00:06:36.955 "get_zone_info": false, 00:06:36.955 "zone_management": false, 00:06:36.955 "zone_append": false, 00:06:36.955 "compare": false, 00:06:36.955 "compare_and_write": false, 00:06:36.955 "abort": true, 00:06:36.955 "seek_hole": false, 00:06:36.955 "seek_data": false, 00:06:36.955 "copy": true, 00:06:36.955 "nvme_iov_md": false 00:06:36.955 }, 00:06:36.955 "memory_domains": [ 00:06:36.955 { 00:06:36.955 "dma_device_id": "system", 00:06:36.955 "dma_device_type": 1 00:06:36.955 }, 00:06:36.955 { 00:06:36.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.955 "dma_device_type": 2 00:06:36.955 } 00:06:36.955 ], 00:06:36.955 "driver_specific": {} 00:06:36.955 }, 00:06:36.955 { 00:06:36.955 "name": "Passthru0", 00:06:36.955 "aliases": [ 00:06:36.955 "d6ef9b5e-2fb0-5e0c-91a8-47f93e8a6827" 00:06:36.955 ], 00:06:36.955 "product_name": "passthru", 00:06:36.955 "block_size": 512, 00:06:36.955 "num_blocks": 16384, 00:06:36.955 "uuid": "d6ef9b5e-2fb0-5e0c-91a8-47f93e8a6827", 00:06:36.955 "assigned_rate_limits": { 00:06:36.955 "rw_ios_per_sec": 0, 00:06:36.955 "rw_mbytes_per_sec": 0, 00:06:36.955 "r_mbytes_per_sec": 0, 00:06:36.955 "w_mbytes_per_sec": 0 00:06:36.955 }, 00:06:36.955 "claimed": false, 00:06:36.955 "zoned": false, 00:06:36.955 "supported_io_types": { 00:06:36.955 "read": true, 00:06:36.955 "write": true, 00:06:36.955 "unmap": true, 00:06:36.955 "flush": true, 00:06:36.955 "reset": true, 00:06:36.955 "nvme_admin": false, 00:06:36.955 "nvme_io": false, 00:06:36.955 
"nvme_io_md": false, 00:06:36.955 "write_zeroes": true, 00:06:36.955 "zcopy": true, 00:06:36.955 "get_zone_info": false, 00:06:36.955 "zone_management": false, 00:06:36.955 "zone_append": false, 00:06:36.955 "compare": false, 00:06:36.955 "compare_and_write": false, 00:06:36.955 "abort": true, 00:06:36.955 "seek_hole": false, 00:06:36.955 "seek_data": false, 00:06:36.955 "copy": true, 00:06:36.955 "nvme_iov_md": false 00:06:36.955 }, 00:06:36.955 "memory_domains": [ 00:06:36.955 { 00:06:36.955 "dma_device_id": "system", 00:06:36.955 "dma_device_type": 1 00:06:36.955 }, 00:06:36.955 { 00:06:36.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.955 "dma_device_type": 2 00:06:36.955 } 00:06:36.955 ], 00:06:36.955 "driver_specific": { 00:06:36.955 "passthru": { 00:06:36.955 "name": "Passthru0", 00:06:36.955 "base_bdev_name": "Malloc2" 00:06:36.955 } 00:06:36.955 } 00:06:36.955 } 00:06:36.955 ]' 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:36.955 03:56:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:36.956 00:06:36.956 real 0m0.332s 00:06:36.956 user 0m0.221s 00:06:36.956 sys 0m0.041s 00:06:36.956 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.956 ************************************ 00:06:36.956 END TEST rpc_daemon_integrity 00:06:36.956 ************************************ 00:06:36.956 03:56:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.956 03:56:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:36.956 03:56:18 rpc -- rpc/rpc.sh@84 -- # killprocess 56904 00:06:36.956 03:56:18 rpc -- common/autotest_common.sh@954 -- # '[' -z 56904 ']' 00:06:36.956 03:56:18 rpc -- common/autotest_common.sh@958 -- # kill -0 56904 00:06:36.956 03:56:18 rpc -- common/autotest_common.sh@959 -- # uname 00:06:36.956 03:56:18 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.956 03:56:18 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56904 00:06:36.956 03:56:18 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:06:36.956 03:56:18 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.956 killing process with pid 56904 00:06:36.956 03:56:18 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56904' 00:06:36.956 03:56:18 rpc -- common/autotest_common.sh@973 -- # kill 56904 00:06:36.956 03:56:18 rpc -- common/autotest_common.sh@978 -- # wait 56904 00:06:37.520 00:06:37.520 real 0m3.139s 00:06:37.520 user 0m3.897s 00:06:37.520 sys 0m0.832s 00:06:37.520 03:56:19 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.520 03:56:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.520 ************************************ 00:06:37.520 END TEST rpc 00:06:37.520 ************************************ 00:06:37.520 03:56:19 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:37.520 03:56:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.520 03:56:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.520 03:56:19 -- common/autotest_common.sh@10 -- # set +x 00:06:37.520 ************************************ 00:06:37.520 START TEST skip_rpc 00:06:37.520 ************************************ 00:06:37.520 03:56:19 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:37.779 * Looking for test storage... 00:06:37.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:37.779 03:56:19 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:37.779 03:56:19 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:37.779 03:56:19 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:37.779 03:56:19 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.779 03:56:19 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:37.779 03:56:19 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.779 03:56:19 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:37.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.779 --rc genhtml_branch_coverage=1 00:06:37.779 --rc genhtml_function_coverage=1 00:06:37.779 --rc genhtml_legend=1 00:06:37.779 --rc geninfo_all_blocks=1 00:06:37.779 --rc geninfo_unexecuted_blocks=1 00:06:37.779 00:06:37.779 ' 00:06:37.779 03:56:19 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:37.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.779 --rc genhtml_branch_coverage=1 00:06:37.779 --rc genhtml_function_coverage=1 00:06:37.779 --rc genhtml_legend=1 00:06:37.779 --rc geninfo_all_blocks=1 00:06:37.779 --rc geninfo_unexecuted_blocks=1 00:06:37.779 00:06:37.779 ' 00:06:37.779 03:56:19 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:37.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.779 --rc genhtml_branch_coverage=1 00:06:37.779 --rc genhtml_function_coverage=1 00:06:37.779 --rc genhtml_legend=1 00:06:37.779 --rc geninfo_all_blocks=1 00:06:37.779 --rc geninfo_unexecuted_blocks=1 00:06:37.779 00:06:37.779 ' 00:06:37.779 03:56:19 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:37.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.779 --rc genhtml_branch_coverage=1 00:06:37.779 --rc genhtml_function_coverage=1 00:06:37.779 --rc genhtml_legend=1 00:06:37.779 --rc geninfo_all_blocks=1 00:06:37.779 --rc geninfo_unexecuted_blocks=1 00:06:37.779 00:06:37.779 ' 00:06:37.779 03:56:19 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:37.779 03:56:19 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:37.779 03:56:19 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:37.779 03:56:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.779 03:56:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.779 03:56:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.779 ************************************ 00:06:37.779 START TEST skip_rpc 00:06:37.779 ************************************ 00:06:37.779 03:56:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:37.779 03:56:19 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57110 00:06:37.779 03:56:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:37.779 03:56:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:37.779 03:56:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:38.037 [2024-12-09 03:56:19.736849] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:06:38.037 [2024-12-09 03:56:19.737738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57110 ] 00:06:38.037 [2024-12-09 03:56:19.881291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.037 [2024-12-09 03:56:19.969019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.295 [2024-12-09 03:56:20.080528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57110 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57110 ']' 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57110 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57110 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.560 killing process with pid 57110 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 57110' 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57110 00:06:43.560 03:56:24 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57110 00:06:43.560 00:06:43.560 real 0m5.574s 00:06:43.560 user 0m5.092s 00:06:43.560 sys 0m0.394s 00:06:43.560 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.560 03:56:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.560 ************************************ 00:06:43.560 END TEST skip_rpc 00:06:43.560 ************************************ 00:06:43.560 03:56:25 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:43.560 03:56:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.560 03:56:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.560 03:56:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.560 ************************************ 00:06:43.560 START TEST skip_rpc_with_json 00:06:43.560 ************************************ 00:06:43.560 03:56:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:43.560 03:56:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:43.560 03:56:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57196 00:06:43.560 03:56:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:43.560 03:56:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.560 03:56:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57196 00:06:43.560 03:56:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57196 ']' 00:06:43.560 03:56:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.560 03:56:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.560 03:56:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.560 03:56:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.560 03:56:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:43.560 [2024-12-09 03:56:25.366599] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:06:43.560 [2024-12-09 03:56:25.366720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57196 ] 00:06:43.877 [2024-12-09 03:56:25.521083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.877 [2024-12-09 03:56:25.612519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.877 [2024-12-09 03:56:25.710102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.444 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.444 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:44.444 03:56:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:44.444 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.444 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:44.703 [2024-12-09 03:56:26.394602] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:44.703 request: 00:06:44.703 { 00:06:44.703 "trtype": "tcp", 00:06:44.703 "method": "nvmf_get_transports", 00:06:44.703 "req_id": 1 00:06:44.703 } 00:06:44.703 Got JSON-RPC error response 00:06:44.703 response: 00:06:44.703 { 00:06:44.703 "code": -19, 00:06:44.703 "message": "No such device" 00:06:44.703 } 00:06:44.703 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:44.703 03:56:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:44.703 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.703 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:44.703 [2024-12-09 03:56:26.406711] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.703 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.703 03:56:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:44.703 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.703 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:44.703 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.703 03:56:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:44.703 { 00:06:44.703 "subsystems": [ 00:06:44.703 { 00:06:44.703 "subsystem": "fsdev", 00:06:44.703 "config": [ 00:06:44.703 { 00:06:44.703 "method": "fsdev_set_opts", 00:06:44.703 "params": { 00:06:44.703 "fsdev_io_pool_size": 65535, 00:06:44.703 "fsdev_io_cache_size": 256 00:06:44.703 } 00:06:44.703 } 00:06:44.703 ] 00:06:44.703 }, 00:06:44.703 { 00:06:44.703 "subsystem": "keyring", 00:06:44.703 "config": [] 00:06:44.703 }, 00:06:44.703 { 00:06:44.703 "subsystem": "iobuf", 00:06:44.703 "config": [ 00:06:44.703 { 00:06:44.703 "method": "iobuf_set_options", 00:06:44.703 "params": { 00:06:44.703 "small_pool_count": 8192, 00:06:44.703 "large_pool_count": 1024, 00:06:44.703 "small_bufsize": 8192, 00:06:44.703 "large_bufsize": 135168, 00:06:44.703 "enable_numa": false 00:06:44.703 } 
00:06:44.703 } 00:06:44.703 ] 00:06:44.703 }, 00:06:44.703 { 00:06:44.703 "subsystem": "sock", 00:06:44.703 "config": [ 00:06:44.703 { 00:06:44.703 "method": "sock_set_default_impl", 00:06:44.703 "params": { 00:06:44.703 "impl_name": "uring" 00:06:44.703 } 00:06:44.703 }, 00:06:44.703 { 00:06:44.703 "method": "sock_impl_set_options", 00:06:44.703 "params": { 00:06:44.703 "impl_name": "ssl", 00:06:44.703 "recv_buf_size": 4096, 00:06:44.703 "send_buf_size": 4096, 00:06:44.703 "enable_recv_pipe": true, 00:06:44.703 "enable_quickack": false, 00:06:44.703 "enable_placement_id": 0, 00:06:44.703 "enable_zerocopy_send_server": true, 00:06:44.703 "enable_zerocopy_send_client": false, 00:06:44.703 "zerocopy_threshold": 0, 00:06:44.703 "tls_version": 0, 00:06:44.703 "enable_ktls": false 00:06:44.703 } 00:06:44.703 }, 00:06:44.703 { 00:06:44.703 "method": "sock_impl_set_options", 00:06:44.703 "params": { 00:06:44.703 "impl_name": "posix", 00:06:44.703 "recv_buf_size": 2097152, 00:06:44.703 "send_buf_size": 2097152, 00:06:44.703 "enable_recv_pipe": true, 00:06:44.703 "enable_quickack": false, 00:06:44.703 "enable_placement_id": 0, 00:06:44.703 "enable_zerocopy_send_server": true, 00:06:44.703 "enable_zerocopy_send_client": false, 00:06:44.703 "zerocopy_threshold": 0, 00:06:44.703 "tls_version": 0, 00:06:44.703 "enable_ktls": false 00:06:44.703 } 00:06:44.703 }, 00:06:44.703 { 00:06:44.703 "method": "sock_impl_set_options", 00:06:44.703 "params": { 00:06:44.703 "impl_name": "uring", 00:06:44.703 "recv_buf_size": 2097152, 00:06:44.703 "send_buf_size": 2097152, 00:06:44.703 "enable_recv_pipe": true, 00:06:44.703 "enable_quickack": false, 00:06:44.703 "enable_placement_id": 0, 00:06:44.703 "enable_zerocopy_send_server": false, 00:06:44.703 "enable_zerocopy_send_client": false, 00:06:44.703 "zerocopy_threshold": 0, 00:06:44.704 "tls_version": 0, 00:06:44.704 "enable_ktls": false 00:06:44.704 } 00:06:44.704 } 00:06:44.704 ] 00:06:44.704 }, 00:06:44.704 { 00:06:44.704 "subsystem": "vmd", 00:06:44.704 "config": [] 00:06:44.704 }, 00:06:44.704 { 00:06:44.704 "subsystem": "accel", 00:06:44.704 "config": [ 00:06:44.704 { 00:06:44.704 "method": "accel_set_options", 00:06:44.704 "params": { 00:06:44.704 "small_cache_size": 128, 00:06:44.704 "large_cache_size": 16, 00:06:44.704 "task_count": 2048, 00:06:44.704 "sequence_count": 2048, 00:06:44.704 "buf_count": 2048 00:06:44.704 } 00:06:44.704 } 00:06:44.704 ] 00:06:44.704 }, 00:06:44.704 { 00:06:44.704 "subsystem": "bdev", 00:06:44.704 "config": [ 00:06:44.704 { 00:06:44.704 "method": "bdev_set_options", 00:06:44.704 "params": { 00:06:44.704 "bdev_io_pool_size": 65535, 00:06:44.704 "bdev_io_cache_size": 256, 00:06:44.704 "bdev_auto_examine": true, 00:06:44.704 "iobuf_small_cache_size": 128, 00:06:44.704 "iobuf_large_cache_size": 16 00:06:44.704 } 00:06:44.704 }, 00:06:44.704 { 00:06:44.704 "method": "bdev_raid_set_options", 00:06:44.704 "params": { 00:06:44.704 "process_window_size_kb": 1024, 00:06:44.704 "process_max_bandwidth_mb_sec": 0 00:06:44.704 } 00:06:44.704 }, 00:06:44.704 { 00:06:44.704 "method": "bdev_iscsi_set_options", 00:06:44.704 "params": { 00:06:44.704 "timeout_sec": 30 00:06:44.704 } 00:06:44.704 }, 00:06:44.704 { 00:06:44.704 "method": "bdev_nvme_set_options", 00:06:44.704 "params": { 00:06:44.704 "action_on_timeout": "none", 00:06:44.704 "timeout_us": 0, 00:06:44.704 "timeout_admin_us": 0, 00:06:44.704 "keep_alive_timeout_ms": 10000, 00:06:44.704 "arbitration_burst": 0, 00:06:44.704 "low_priority_weight": 0, 00:06:44.704 "medium_priority_weight": 
0, 00:06:44.704 "high_priority_weight": 0, 00:06:44.704 "nvme_adminq_poll_period_us": 10000, 00:06:44.704 "nvme_ioq_poll_period_us": 0, 00:06:44.704 "io_queue_requests": 0, 00:06:44.704 "delay_cmd_submit": true, 00:06:44.704 "transport_retry_count": 4, 00:06:44.704 "bdev_retry_count": 3, 00:06:44.704 "transport_ack_timeout": 0, 00:06:44.704 "ctrlr_loss_timeout_sec": 0, 00:06:44.704 "reconnect_delay_sec": 0, 00:06:44.704 "fast_io_fail_timeout_sec": 0, 00:06:44.704 "disable_auto_failback": false, 00:06:44.704 "generate_uuids": false, 00:06:44.704 "transport_tos": 0, 00:06:44.704 "nvme_error_stat": false, 00:06:44.704 "rdma_srq_size": 0, 00:06:44.704 "io_path_stat": false, 00:06:44.704 "allow_accel_sequence": false, 00:06:44.704 "rdma_max_cq_size": 0, 00:06:44.704 "rdma_cm_event_timeout_ms": 0, 00:06:44.704 "dhchap_digests": [ 00:06:44.704 "sha256", 00:06:44.704 "sha384", 00:06:44.704 "sha512" 00:06:44.704 ], 00:06:44.704 "dhchap_dhgroups": [ 00:06:44.704 "null", 00:06:44.704 "ffdhe2048", 00:06:44.704 "ffdhe3072", 00:06:44.704 "ffdhe4096", 00:06:44.704 "ffdhe6144", 00:06:44.704 "ffdhe8192" 00:06:44.704 ] 00:06:44.704 } 00:06:44.704 }, 00:06:44.704 { 00:06:44.704 "method": "bdev_nvme_set_hotplug", 00:06:44.704 "params": { 00:06:44.704 "period_us": 100000, 00:06:44.704 "enable": false 00:06:44.704 } 00:06:44.704 }, 00:06:44.704 { 00:06:44.704 "method": "bdev_wait_for_examine" 00:06:44.704 } 00:06:44.704 ] 00:06:44.704 }, 00:06:44.704 { 00:06:44.704 "subsystem": "scsi", 00:06:44.704 "config": null 00:06:44.704 }, 00:06:44.704 { 00:06:44.704 "subsystem": "scheduler", 00:06:44.704 "config": [ 00:06:44.704 { 00:06:44.704 "method": "framework_set_scheduler", 00:06:44.704 "params": { 00:06:44.704 "name": "static" 00:06:44.704 } 00:06:44.704 } 00:06:44.704 ] 00:06:44.704 }, 00:06:44.704 { 00:06:44.704 "subsystem": "vhost_scsi", 00:06:44.704 "config": [] 00:06:44.704 }, 00:06:44.704 { 00:06:44.704 "subsystem": "vhost_blk", 00:06:44.704 "config": [] 00:06:44.704 }, 00:06:44.704 { 00:06:44.704 "subsystem": "ublk", 00:06:44.704 "config": [] 00:06:44.704 }, 00:06:44.704 { 00:06:44.704 "subsystem": "nbd", 00:06:44.704 "config": [] 00:06:44.704 }, 00:06:44.704 { 00:06:44.704 "subsystem": "nvmf", 00:06:44.704 "config": [ 00:06:44.704 { 00:06:44.704 "method": "nvmf_set_config", 00:06:44.704 "params": { 00:06:44.704 "discovery_filter": "match_any", 00:06:44.704 "admin_cmd_passthru": { 00:06:44.704 "identify_ctrlr": false 00:06:44.704 }, 00:06:44.704 "dhchap_digests": [ 00:06:44.704 "sha256", 00:06:44.704 "sha384", 00:06:44.704 "sha512" 00:06:44.704 ], 00:06:44.704 "dhchap_dhgroups": [ 00:06:44.704 "null", 00:06:44.704 "ffdhe2048", 00:06:44.704 "ffdhe3072", 00:06:44.704 "ffdhe4096", 00:06:44.704 "ffdhe6144", 00:06:44.704 "ffdhe8192" 00:06:44.704 ] 00:06:44.704 } 00:06:44.704 }, 00:06:44.704 { 00:06:44.704 "method": "nvmf_set_max_subsystems", 00:06:44.704 "params": { 00:06:44.704 "max_subsystems": 1024 00:06:44.704 } 00:06:44.704 }, 00:06:44.704 { 00:06:44.704 "method": "nvmf_set_crdt", 00:06:44.704 "params": { 00:06:44.704 "crdt1": 0, 00:06:44.704 "crdt2": 0, 00:06:44.704 "crdt3": 0 00:06:44.704 } 00:06:44.704 }, 00:06:44.704 { 00:06:44.704 "method": "nvmf_create_transport", 00:06:44.704 "params": { 00:06:44.704 "trtype": "TCP", 00:06:44.704 "max_queue_depth": 128, 00:06:44.704 "max_io_qpairs_per_ctrlr": 127, 00:06:44.704 "in_capsule_data_size": 4096, 00:06:44.704 "max_io_size": 131072, 00:06:44.704 "io_unit_size": 131072, 00:06:44.704 "max_aq_depth": 128, 00:06:44.704 "num_shared_buffers": 511, 00:06:44.704 
"buf_cache_size": 4294967295, 00:06:44.704 "dif_insert_or_strip": false, 00:06:44.704 "zcopy": false, 00:06:44.704 "c2h_success": true, 00:06:44.704 "sock_priority": 0, 00:06:44.704 "abort_timeout_sec": 1, 00:06:44.704 "ack_timeout": 0, 00:06:44.705 "data_wr_pool_size": 0 00:06:44.705 } 00:06:44.705 } 00:06:44.705 ] 00:06:44.705 }, 00:06:44.705 { 00:06:44.705 "subsystem": "iscsi", 00:06:44.705 "config": [ 00:06:44.705 { 00:06:44.705 "method": "iscsi_set_options", 00:06:44.705 "params": { 00:06:44.705 "node_base": "iqn.2016-06.io.spdk", 00:06:44.705 "max_sessions": 128, 00:06:44.705 "max_connections_per_session": 2, 00:06:44.705 "max_queue_depth": 64, 00:06:44.705 "default_time2wait": 2, 00:06:44.705 "default_time2retain": 20, 00:06:44.705 "first_burst_length": 8192, 00:06:44.705 "immediate_data": true, 00:06:44.705 "allow_duplicated_isid": false, 00:06:44.705 "error_recovery_level": 0, 00:06:44.705 "nop_timeout": 60, 00:06:44.705 "nop_in_interval": 30, 00:06:44.705 "disable_chap": false, 00:06:44.705 "require_chap": false, 00:06:44.705 "mutual_chap": false, 00:06:44.705 "chap_group": 0, 00:06:44.705 "max_large_datain_per_connection": 64, 00:06:44.705 "max_r2t_per_connection": 4, 00:06:44.705 "pdu_pool_size": 36864, 00:06:44.705 "immediate_data_pool_size": 16384, 00:06:44.705 "data_out_pool_size": 2048 00:06:44.705 } 00:06:44.705 } 00:06:44.705 ] 00:06:44.705 } 00:06:44.705 ] 00:06:44.705 } 00:06:44.705 03:56:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:44.705 03:56:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57196 00:06:44.705 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57196 ']' 00:06:44.705 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57196 00:06:44.705 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:44.705 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.705 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57196 00:06:44.705 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.705 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.705 killing process with pid 57196 00:06:44.705 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57196' 00:06:44.705 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57196 00:06:44.705 03:56:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57196 00:06:45.270 03:56:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57229 00:06:45.270 03:56:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:45.270 03:56:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:50.536 03:56:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57229 00:06:50.536 03:56:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57229 ']' 00:06:50.536 03:56:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57229 00:06:50.536 03:56:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:50.536 03:56:32 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.536 03:56:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57229 00:06:50.536 03:56:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.536 03:56:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.536 03:56:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57229' 00:06:50.536 killing process with pid 57229 00:06:50.536 03:56:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57229 00:06:50.536 03:56:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57229 00:06:50.795 03:56:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:50.795 03:56:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:51.053 00:06:51.053 real 0m7.456s 00:06:51.053 user 0m7.030s 00:06:51.053 sys 0m0.891s 00:06:51.053 03:56:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.053 03:56:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:51.053 ************************************ 00:06:51.053 END TEST skip_rpc_with_json 00:06:51.053 ************************************ 00:06:51.053 03:56:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:51.053 03:56:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.053 03:56:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.053 03:56:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.053 ************************************ 00:06:51.053 START TEST skip_rpc_with_delay 00:06:51.053 ************************************ 00:06:51.053 03:56:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:51.053 03:56:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:51.053 03:56:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:51.053 03:56:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:51.053 03:56:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:51.053 03:56:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.053 03:56:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:51.053 03:56:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.053 03:56:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:51.053 03:56:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.053 03:56:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:51.053 03:56:32 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:51.054 03:56:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:51.054 [2024-12-09 03:56:32.876330] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:51.054 03:56:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:51.054 03:56:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.054 03:56:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:51.054 03:56:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.054 00:06:51.054 real 0m0.099s 00:06:51.054 user 0m0.060s 00:06:51.054 sys 0m0.038s 00:06:51.054 03:56:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.054 03:56:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:51.054 ************************************ 00:06:51.054 END TEST skip_rpc_with_delay 00:06:51.054 ************************************ 00:06:51.054 03:56:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:51.054 03:56:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:51.054 03:56:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:51.054 03:56:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.054 03:56:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.054 03:56:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.054 ************************************ 00:06:51.054 START TEST exit_on_failed_rpc_init 00:06:51.054 ************************************ 00:06:51.054 03:56:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:51.054 03:56:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57339 00:06:51.054 03:56:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57339 00:06:51.054 03:56:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.054 03:56:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57339 ']' 00:06:51.054 03:56:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.054 03:56:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.054 03:56:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.054 03:56:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.054 03:56:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:51.313 [2024-12-09 03:56:33.029468] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
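The skip_rpc_with_delay case that finishes above is an expected-failure test: spdk_tgt is launched with both --no-rpc-server and --wait-for-rpc, and the test only passes when the app aborts with the "Cannot use '--wait-for-rpc' if no RPC server is going to be started" error. A minimal sketch of that check, using the binary path shown in the trace (the NOT() helper in autotest_common.sh wraps the same idea):

    # run the invalid flag combination and require a non-zero exit status
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'spdk_tgt started despite the invalid flag combination' >&2
        exit 1
    fi
    echo 'got the expected startup failure'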
00:06:51.313 [2024-12-09 03:56:33.029594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57339 ] 00:06:51.313 [2024-12-09 03:56:33.169996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.313 [2024-12-09 03:56:33.249704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.571 [2024-12-09 03:56:33.356787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.199 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.199 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:52.199 03:56:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.199 03:56:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:52.199 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:52.199 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:52.199 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.199 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.199 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.199 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.199 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.199 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.199 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.199 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:52.199 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:52.199 [2024-12-09 03:56:34.098122] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:06:52.199 [2024-12-09 03:56:34.098262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57357 ] 00:06:52.457 [2024-12-09 03:56:34.246673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.457 [2024-12-09 03:56:34.321189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.457 [2024-12-09 03:56:34.321272] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
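The error above is the point of exit_on_failed_rpc_init: the first target (pid 57339) already owns /var/tmp/spdk.sock, so a second target started without its own RPC listen address must fail to initialize and exit. Outside of this negative test, two targets are normally given distinct sockets with -r, and clients pick the matching socket with rpc.py -s; the socket names below are illustrative only:

    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
    scripts/rpc.py -s /var/tmp/spdk_a.sock spdk_get_version
    scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version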
00:06:52.457 [2024-12-09 03:56:34.321288] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:52.457 [2024-12-09 03:56:34.321297] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.457 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:52.457 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.716 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:52.716 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:52.716 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:52.716 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.716 03:56:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:52.716 03:56:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57339 00:06:52.716 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57339 ']' 00:06:52.716 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57339 00:06:52.716 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:52.716 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.716 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57339 00:06:52.716 killing process with pid 57339 00:06:52.716 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.716 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.716 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57339' 00:06:52.716 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57339 00:06:52.716 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57339 00:06:53.281 00:06:53.281 real 0m2.012s 00:06:53.281 user 0m2.222s 00:06:53.281 sys 0m0.535s 00:06:53.281 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.281 03:56:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:53.281 ************************************ 00:06:53.281 END TEST exit_on_failed_rpc_init 00:06:53.281 ************************************ 00:06:53.281 03:56:35 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:53.281 00:06:53.281 real 0m15.552s 00:06:53.281 user 0m14.574s 00:06:53.281 sys 0m2.090s 00:06:53.281 ************************************ 00:06:53.281 END TEST skip_rpc 00:06:53.281 ************************************ 00:06:53.281 03:56:35 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.281 03:56:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.281 03:56:35 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:53.281 03:56:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.281 03:56:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.281 03:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:53.281 
************************************ 00:06:53.281 START TEST rpc_client 00:06:53.281 ************************************ 00:06:53.281 03:56:35 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:53.281 * Looking for test storage... 00:06:53.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:53.281 03:56:35 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:53.281 03:56:35 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:53.281 03:56:35 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:53.539 03:56:35 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.539 03:56:35 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:53.539 03:56:35 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.539 03:56:35 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:53.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.539 --rc genhtml_branch_coverage=1 00:06:53.539 --rc genhtml_function_coverage=1 00:06:53.539 --rc genhtml_legend=1 00:06:53.539 --rc geninfo_all_blocks=1 00:06:53.539 --rc geninfo_unexecuted_blocks=1 00:06:53.539 00:06:53.539 ' 00:06:53.539 03:56:35 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:53.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.539 --rc genhtml_branch_coverage=1 00:06:53.539 --rc genhtml_function_coverage=1 00:06:53.539 --rc genhtml_legend=1 00:06:53.539 --rc geninfo_all_blocks=1 00:06:53.539 --rc geninfo_unexecuted_blocks=1 00:06:53.539 00:06:53.539 ' 00:06:53.539 03:56:35 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:53.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.539 --rc genhtml_branch_coverage=1 00:06:53.539 --rc genhtml_function_coverage=1 00:06:53.539 --rc genhtml_legend=1 00:06:53.539 --rc geninfo_all_blocks=1 00:06:53.539 --rc geninfo_unexecuted_blocks=1 00:06:53.539 00:06:53.539 ' 00:06:53.539 03:56:35 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:53.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.539 --rc genhtml_branch_coverage=1 00:06:53.539 --rc genhtml_function_coverage=1 00:06:53.539 --rc genhtml_legend=1 00:06:53.539 --rc geninfo_all_blocks=1 00:06:53.539 --rc geninfo_unexecuted_blocks=1 00:06:53.539 00:06:53.539 ' 00:06:53.539 03:56:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:53.539 OK 00:06:53.539 03:56:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:53.539 00:06:53.539 real 0m0.224s 00:06:53.539 user 0m0.150s 00:06:53.539 sys 0m0.084s 00:06:53.539 03:56:35 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.539 03:56:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:53.539 ************************************ 00:06:53.539 END TEST rpc_client 00:06:53.539 ************************************ 00:06:53.539 03:56:35 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:53.539 03:56:35 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.539 03:56:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.539 03:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:53.539 ************************************ 00:06:53.539 START TEST json_config 00:06:53.539 ************************************ 00:06:53.539 03:56:35 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:53.539 03:56:35 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:53.539 03:56:35 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:53.539 03:56:35 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:53.539 03:56:35 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:53.539 03:56:35 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.539 03:56:35 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.539 03:56:35 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.539 03:56:35 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.539 03:56:35 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.539 03:56:35 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.539 03:56:35 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.539 03:56:35 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.539 03:56:35 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.539 03:56:35 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.539 03:56:35 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.539 03:56:35 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:53.539 03:56:35 json_config -- scripts/common.sh@345 -- # : 1 00:06:53.539 03:56:35 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.539 03:56:35 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.539 03:56:35 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:53.796 03:56:35 json_config -- scripts/common.sh@353 -- # local d=1 00:06:53.796 03:56:35 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.796 03:56:35 json_config -- scripts/common.sh@355 -- # echo 1 00:06:53.796 03:56:35 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.796 03:56:35 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:53.796 03:56:35 json_config -- scripts/common.sh@353 -- # local d=2 00:06:53.796 03:56:35 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.796 03:56:35 json_config -- scripts/common.sh@355 -- # echo 2 00:06:53.796 03:56:35 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.796 03:56:35 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.796 03:56:35 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.796 03:56:35 json_config -- scripts/common.sh@368 -- # return 0 00:06:53.796 03:56:35 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.796 03:56:35 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:53.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.796 --rc genhtml_branch_coverage=1 00:06:53.796 --rc genhtml_function_coverage=1 00:06:53.796 --rc genhtml_legend=1 00:06:53.796 --rc geninfo_all_blocks=1 00:06:53.796 --rc geninfo_unexecuted_blocks=1 00:06:53.796 00:06:53.796 ' 00:06:53.796 03:56:35 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:53.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.796 --rc genhtml_branch_coverage=1 00:06:53.796 --rc genhtml_function_coverage=1 00:06:53.796 --rc genhtml_legend=1 00:06:53.796 --rc geninfo_all_blocks=1 00:06:53.796 --rc geninfo_unexecuted_blocks=1 00:06:53.796 00:06:53.796 ' 00:06:53.796 03:56:35 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:53.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.796 --rc genhtml_branch_coverage=1 00:06:53.796 --rc genhtml_function_coverage=1 00:06:53.796 --rc genhtml_legend=1 00:06:53.796 --rc geninfo_all_blocks=1 00:06:53.796 --rc geninfo_unexecuted_blocks=1 00:06:53.796 00:06:53.796 ' 00:06:53.796 03:56:35 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:53.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.797 --rc genhtml_branch_coverage=1 00:06:53.797 --rc genhtml_function_coverage=1 00:06:53.797 --rc genhtml_legend=1 00:06:53.797 --rc geninfo_all_blocks=1 00:06:53.797 --rc geninfo_unexecuted_blocks=1 00:06:53.797 00:06:53.797 ' 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.797 03:56:35 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:53.797 03:56:35 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:53.797 03:56:35 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.797 03:56:35 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.797 03:56:35 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.797 03:56:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.797 03:56:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.797 03:56:35 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.797 03:56:35 json_config -- paths/export.sh@5 -- # export PATH 00:06:53.797 03:56:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@51 -- # : 0 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:53.797 03:56:35 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:53.797 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:53.797 03:56:35 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:53.797 INFO: JSON configuration test init 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:53.797 03:56:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.797 03:56:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:53.797 03:56:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.797 03:56:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.797 Waiting for target to run... 00:06:53.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
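Here json_config_test_start_app launches spdk_tgt on a dedicated RPC socket (-r /var/tmp/spdk_tgt.sock) and waitforlisten blocks until that socket answers before any tgt_rpc call is made. A rough equivalent of that readiness poll (the retry count mirrors max_retries=100 from the trace):

    sock=/var/tmp/spdk_tgt.sock
    for ((i = 0; i < 100; i++)); do
        # the target is considered up once it answers a trivial RPC
        scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done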
00:06:53.797 03:56:35 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:53.797 03:56:35 json_config -- json_config/common.sh@9 -- # local app=target 00:06:53.797 03:56:35 json_config -- json_config/common.sh@10 -- # shift 00:06:53.797 03:56:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:53.797 03:56:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:53.797 03:56:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:53.797 03:56:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:53.797 03:56:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:53.797 03:56:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57496 00:06:53.797 03:56:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:53.797 03:56:35 json_config -- json_config/common.sh@25 -- # waitforlisten 57496 /var/tmp/spdk_tgt.sock 00:06:53.797 03:56:35 json_config -- common/autotest_common.sh@835 -- # '[' -z 57496 ']' 00:06:53.797 03:56:35 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:53.797 03:56:35 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:53.797 03:56:35 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.797 03:56:35 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:53.797 03:56:35 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.797 03:56:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.797 [2024-12-09 03:56:35.618617] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
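Because pid 57496 was started with --wait-for-rpc, framework initialization pauses until the test supplies configuration over RPC; the next steps in the trace pipe gen_nvme.sh --json-with-subsystems into load_config. A sketch of that hand-off (framework_start_init is a real RPC, but whether it must be issued explicitly depends on what the loaded JSON already contains):

    scripts/gen_nvme.sh --json-with-subsystems | \
        scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
    # finish deferred init if the loaded config did not already do so
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init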
00:06:53.797 [2024-12-09 03:56:35.619007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57496 ] 00:06:54.363 [2024-12-09 03:56:36.176993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.363 [2024-12-09 03:56:36.238968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.928 03:56:36 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.928 03:56:36 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:54.928 03:56:36 json_config -- json_config/common.sh@26 -- # echo '' 00:06:54.928 00:06:54.928 03:56:36 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:54.928 03:56:36 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:54.928 03:56:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.928 03:56:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.928 03:56:36 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:54.928 03:56:36 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:54.928 03:56:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:54.928 03:56:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.928 03:56:36 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:54.928 03:56:36 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:54.928 03:56:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:55.186 [2024-12-09 03:56:36.962393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.445 03:56:37 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:55.445 03:56:37 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:55.445 03:56:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:55.445 03:56:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.445 03:56:37 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:55.445 03:56:37 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:55.445 03:56:37 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:55.445 03:56:37 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:55.445 03:56:37 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:55.445 03:56:37 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:55.445 03:56:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:55.445 03:56:37 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@54 -- # sort 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:55.704 03:56:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:55.704 03:56:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:55.704 03:56:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:55.704 03:56:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:55.704 03:56:37 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:55.704 03:56:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:55.961 MallocForNvmf0 00:06:55.961 03:56:37 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:55.961 03:56:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:56.219 MallocForNvmf1 00:06:56.219 03:56:38 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:56.219 03:56:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:56.477 [2024-12-09 03:56:38.418993] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.736 03:56:38 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:56.736 03:56:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:56.995 03:56:38 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:56.995 03:56:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:57.254 03:56:38 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:57.254 03:56:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:57.512 03:56:39 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:57.512 03:56:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:57.771 [2024-12-09 03:56:39.483772] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:57.771 03:56:39 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:57.771 03:56:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:57.771 03:56:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:57.771 03:56:39 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:57.771 03:56:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:57.771 03:56:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:57.771 03:56:39 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:57.771 03:56:39 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:57.771 03:56:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:58.030 MallocBdevForConfigChangeCheck 00:06:58.030 03:56:39 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:58.030 03:56:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:58.030 03:56:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:58.030 03:56:39 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:58.030 03:56:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:58.596 INFO: shutting down applications... 00:06:58.596 03:56:40 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
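Everything the nvmf side of this config test needs is created in the trace above through tgt_rpc: two malloc bdevs, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with both namespaces, and a listener on 127.0.0.1:4420. Issued directly against the test socket, the same sequence is (method names and arguments taken verbatim from the trace):

    rpc='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420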
00:06:58.596 03:56:40 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:58.596 03:56:40 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:58.596 03:56:40 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:58.596 03:56:40 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:58.855 Calling clear_iscsi_subsystem 00:06:58.855 Calling clear_nvmf_subsystem 00:06:58.855 Calling clear_nbd_subsystem 00:06:58.855 Calling clear_ublk_subsystem 00:06:58.855 Calling clear_vhost_blk_subsystem 00:06:58.855 Calling clear_vhost_scsi_subsystem 00:06:58.855 Calling clear_bdev_subsystem 00:06:58.855 03:56:40 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:58.855 03:56:40 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:58.855 03:56:40 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:58.855 03:56:40 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:58.855 03:56:40 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:58.855 03:56:40 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:59.421 03:56:41 json_config -- json_config/json_config.sh@352 -- # break 00:06:59.421 03:56:41 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:59.421 03:56:41 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:59.421 03:56:41 json_config -- json_config/common.sh@31 -- # local app=target 00:06:59.421 03:56:41 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:59.421 03:56:41 json_config -- json_config/common.sh@35 -- # [[ -n 57496 ]] 00:06:59.421 03:56:41 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57496 00:06:59.421 03:56:41 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:59.421 03:56:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:59.421 03:56:41 json_config -- json_config/common.sh@41 -- # kill -0 57496 00:06:59.421 03:56:41 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:59.988 03:56:41 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:59.988 03:56:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:59.988 03:56:41 json_config -- json_config/common.sh@41 -- # kill -0 57496 00:06:59.988 03:56:41 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:59.988 03:56:41 json_config -- json_config/common.sh@43 -- # break 00:06:59.988 03:56:41 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:59.988 SPDK target shutdown done 00:06:59.988 INFO: relaunching applications... 00:06:59.988 03:56:41 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:59.988 03:56:41 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
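The shutdown path above first clears the live configuration with clear_config.py, re-checks it with config_filter.py -method check_empty, then sends SIGINT to the target and polls for up to 30 half-second intervals before reporting 'SPDK target shutdown done'. The polling idiom, roughly as the common.sh helper does it:

    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # stop waiting once the process is gone
        sleep 0.5
    done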
00:06:59.988 03:56:41 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:59.988 03:56:41 json_config -- json_config/common.sh@9 -- # local app=target 00:06:59.988 03:56:41 json_config -- json_config/common.sh@10 -- # shift 00:06:59.988 03:56:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:59.988 03:56:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:59.988 03:56:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:59.988 03:56:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:59.988 03:56:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:59.988 Waiting for target to run... 00:06:59.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:59.988 03:56:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57697 00:06:59.988 03:56:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:59.988 03:56:41 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:59.988 03:56:41 json_config -- json_config/common.sh@25 -- # waitforlisten 57697 /var/tmp/spdk_tgt.sock 00:06:59.988 03:56:41 json_config -- common/autotest_common.sh@835 -- # '[' -z 57697 ']' 00:06:59.988 03:56:41 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:59.988 03:56:41 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.988 03:56:41 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:59.988 03:56:41 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.988 03:56:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:59.988 [2024-12-09 03:56:41.783271] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:06:59.988 [2024-12-09 03:56:41.783687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57697 ] 00:07:00.553 [2024-12-09 03:56:42.326333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.553 [2024-12-09 03:56:42.386805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.810 [2024-12-09 03:56:42.528647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.810 [2024-12-09 03:56:42.757565] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.069 [2024-12-09 03:56:42.789636] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:01.069 00:07:01.069 INFO: Checking if target configuration is the same... 
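The "is the configuration the same" check announced above is carried out by json_diff.sh in the trace that follows: the running config is dumped again with save_config, both that dump and spdk_tgt_config.json are normalized with config_filter.py -method sort, and the two sorted files are compared with diff -u (exit 0 means identical). Stripped of the mktemp bookkeeping, the comparison is approximately:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | \
        test/json_config/config_filter.py -method sort > live_sorted.json
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > saved_sorted.json
    diff -u live_sorted.json saved_sorted.json && echo 'INFO: JSON config files are the same'

The names live_sorted.json and saved_sorted.json stand in for the mktemp paths used by the real script.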
00:07:01.069 03:56:42 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.069 03:56:42 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:01.069 03:56:42 json_config -- json_config/common.sh@26 -- # echo '' 00:07:01.069 03:56:42 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:01.069 03:56:42 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:01.069 03:56:42 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:01.069 03:56:42 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:01.069 03:56:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:01.069 + '[' 2 -ne 2 ']' 00:07:01.069 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:01.069 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:01.069 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:01.069 +++ basename /dev/fd/62 00:07:01.069 ++ mktemp /tmp/62.XXX 00:07:01.069 + tmp_file_1=/tmp/62.04I 00:07:01.069 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:01.069 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:01.069 + tmp_file_2=/tmp/spdk_tgt_config.json.PbM 00:07:01.069 + ret=0 00:07:01.069 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:01.636 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:01.636 + diff -u /tmp/62.04I /tmp/spdk_tgt_config.json.PbM 00:07:01.636 INFO: JSON config files are the same 00:07:01.636 + echo 'INFO: JSON config files are the same' 00:07:01.636 + rm /tmp/62.04I /tmp/spdk_tgt_config.json.PbM 00:07:01.636 + exit 0 00:07:01.636 INFO: changing configuration and checking if this can be detected... 00:07:01.636 03:56:43 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:01.636 03:56:43 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:01.636 03:56:43 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:01.636 03:56:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:01.895 03:56:43 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:01.895 03:56:43 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:01.895 03:56:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:01.895 + '[' 2 -ne 2 ']' 00:07:01.895 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:01.895 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:07:01.895 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:01.895 +++ basename /dev/fd/62 00:07:01.895 ++ mktemp /tmp/62.XXX 00:07:01.895 + tmp_file_1=/tmp/62.D2v 00:07:01.895 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:01.895 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:01.895 + tmp_file_2=/tmp/spdk_tgt_config.json.qxk 00:07:01.895 + ret=0 00:07:01.895 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:02.462 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:02.462 + diff -u /tmp/62.D2v /tmp/spdk_tgt_config.json.qxk 00:07:02.462 + ret=1 00:07:02.462 + echo '=== Start of file: /tmp/62.D2v ===' 00:07:02.462 + cat /tmp/62.D2v 00:07:02.462 + echo '=== End of file: /tmp/62.D2v ===' 00:07:02.462 + echo '' 00:07:02.462 + echo '=== Start of file: /tmp/spdk_tgt_config.json.qxk ===' 00:07:02.462 + cat /tmp/spdk_tgt_config.json.qxk 00:07:02.462 + echo '=== End of file: /tmp/spdk_tgt_config.json.qxk ===' 00:07:02.462 + echo '' 00:07:02.462 + rm /tmp/62.D2v /tmp/spdk_tgt_config.json.qxk 00:07:02.462 + exit 1 00:07:02.462 INFO: configuration change detected. 00:07:02.462 03:56:44 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:07:02.462 03:56:44 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:02.462 03:56:44 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:02.462 03:56:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:02.462 03:56:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.462 03:56:44 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:02.462 03:56:44 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:02.462 03:56:44 json_config -- json_config/json_config.sh@324 -- # [[ -n 57697 ]] 00:07:02.462 03:56:44 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:02.462 03:56:44 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:02.462 03:56:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:02.462 03:56:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.462 03:56:44 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:02.462 03:56:44 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:02.462 03:56:44 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:02.462 03:56:44 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:02.462 03:56:44 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:02.462 03:56:44 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:02.462 03:56:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:02.462 03:56:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.462 03:56:44 json_config -- json_config/json_config.sh@330 -- # killprocess 57697 00:07:02.462 03:56:44 json_config -- common/autotest_common.sh@954 -- # '[' -z 57697 ']' 00:07:02.462 03:56:44 json_config -- common/autotest_common.sh@958 -- # kill -0 57697 00:07:02.462 03:56:44 json_config -- common/autotest_common.sh@959 -- # uname 00:07:02.462 03:56:44 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.462 03:56:44 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57697 00:07:02.462 
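Note: the two json_diff.sh runs traced above boil down to one idea: dump the live configuration with save_config, normalize both JSON documents with config_filter.py -method sort, and diff the results; the diff is empty before the change (exit 0) and non-empty after MallocBdevForConfigChangeCheck is deleted (exit 1). A condensed sketch of that comparison, assuming config_filter.py filters stdin to stdout as the pipeline above suggests; file names are placeholders:

    # Condensed version of the comparison: normalize both configs, then diff.
    ref=./spdk_tgt_config.json
    live=$(mktemp /tmp/live.XXX)
    sorted_ref=$(mktemp /tmp/ref.XXX)

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > "$live"
    ./test/json_config/config_filter.py -method sort < "$ref" > "$sorted_ref"

    if diff -u "$sorted_ref" "$live"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$sorted_ref"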
killing process with pid 57697 00:07:02.462 03:56:44 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.462 03:56:44 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.462 03:56:44 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57697' 00:07:02.462 03:56:44 json_config -- common/autotest_common.sh@973 -- # kill 57697 00:07:02.462 03:56:44 json_config -- common/autotest_common.sh@978 -- # wait 57697 00:07:02.721 03:56:44 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:02.721 03:56:44 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:02.721 03:56:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:02.721 03:56:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.979 03:56:44 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:02.979 INFO: Success 00:07:02.979 03:56:44 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:02.979 ************************************ 00:07:02.979 END TEST json_config 00:07:02.979 ************************************ 00:07:02.979 00:07:02.979 real 0m9.358s 00:07:02.979 user 0m13.345s 00:07:02.979 sys 0m2.136s 00:07:02.979 03:56:44 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.979 03:56:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.979 03:56:44 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:02.979 03:56:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.979 03:56:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.979 03:56:44 -- common/autotest_common.sh@10 -- # set +x 00:07:02.979 ************************************ 00:07:02.979 START TEST json_config_extra_key 00:07:02.979 ************************************ 00:07:02.979 03:56:44 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:02.980 03:56:44 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:02.980 03:56:44 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:07:02.980 03:56:44 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:02.980 03:56:44 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.980 03:56:44 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.980 03:56:44 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:02.980 03:56:44 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.980 03:56:44 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:02.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.980 --rc genhtml_branch_coverage=1 00:07:02.980 --rc genhtml_function_coverage=1 00:07:02.980 --rc genhtml_legend=1 00:07:02.980 --rc geninfo_all_blocks=1 00:07:02.980 --rc geninfo_unexecuted_blocks=1 00:07:02.980 00:07:02.980 ' 00:07:02.980 03:56:44 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:02.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.980 --rc genhtml_branch_coverage=1 00:07:02.980 --rc genhtml_function_coverage=1 00:07:02.980 --rc genhtml_legend=1 00:07:02.980 --rc geninfo_all_blocks=1 00:07:02.980 --rc geninfo_unexecuted_blocks=1 00:07:02.980 00:07:02.980 ' 00:07:02.980 03:56:44 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:02.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.980 --rc genhtml_branch_coverage=1 00:07:02.980 --rc genhtml_function_coverage=1 00:07:02.980 --rc genhtml_legend=1 00:07:02.980 --rc geninfo_all_blocks=1 00:07:02.980 --rc geninfo_unexecuted_blocks=1 00:07:02.980 00:07:02.980 ' 00:07:02.980 03:56:44 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:02.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.980 --rc genhtml_branch_coverage=1 00:07:02.980 --rc genhtml_function_coverage=1 00:07:02.980 --rc genhtml_legend=1 00:07:02.980 --rc geninfo_all_blocks=1 00:07:02.980 --rc geninfo_unexecuted_blocks=1 00:07:02.980 00:07:02.980 ' 00:07:02.980 03:56:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:02.980 03:56:44 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:03.239 03:56:44 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:03.239 03:56:44 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.239 03:56:44 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.239 03:56:44 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.239 03:56:44 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.239 03:56:44 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.239 03:56:44 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.239 03:56:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:03.239 03:56:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:03.239 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:03.239 03:56:44 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:03.240 03:56:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:03.240 03:56:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:03.240 03:56:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:03.240 03:56:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:03.240 03:56:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:03.240 INFO: launching applications... 00:07:03.240 03:56:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:03.240 03:56:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:03.240 03:56:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:03.240 03:56:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:03.240 03:56:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:03.240 03:56:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
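Note: the json_config common.sh sourced above keeps each app's PID, RPC socket, startup flags, and config path in parallel Bash associative arrays keyed by app name ("target" in this run). A small sketch of that bookkeeping pattern; the values below are illustrative, not the test's literal paths:

    # Per-app bookkeeping with associative arrays, mirroring the declarations above.
    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='./test/json_config/extra_key.json')

    app=target
    echo "launching $app: spdk_tgt ${app_params[$app]} --json ${configs_path[$app]}"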
00:07:03.240 03:56:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:03.240 03:56:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:03.240 03:56:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:03.240 03:56:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:03.240 03:56:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:03.240 03:56:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:03.240 03:56:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:03.240 03:56:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:03.240 03:56:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57851 00:07:03.240 03:56:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:03.240 Waiting for target to run... 00:07:03.240 03:56:44 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:03.240 03:56:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57851 /var/tmp/spdk_tgt.sock 00:07:03.240 03:56:44 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57851 ']' 00:07:03.240 03:56:44 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:03.240 03:56:44 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.240 03:56:44 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:03.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:03.240 03:56:44 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.240 03:56:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:03.240 [2024-12-09 03:56:45.024478] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:07:03.240 [2024-12-09 03:56:45.024848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57851 ] 00:07:03.806 [2024-12-09 03:56:45.577630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.806 [2024-12-09 03:56:45.640914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.806 [2024-12-09 03:56:45.676706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.383 00:07:04.383 INFO: shutting down applications... 00:07:04.383 03:56:46 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.383 03:56:46 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:04.383 03:56:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:04.383 03:56:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:07:04.383 03:56:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:04.383 03:56:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:04.383 03:56:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:04.383 03:56:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57851 ]] 00:07:04.383 03:56:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57851 00:07:04.383 03:56:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:04.383 03:56:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:04.383 03:56:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57851 00:07:04.383 03:56:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:04.654 03:56:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:04.654 03:56:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:04.654 03:56:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57851 00:07:04.654 03:56:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:05.221 03:56:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:05.221 03:56:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:05.221 03:56:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57851 00:07:05.222 03:56:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:05.222 03:56:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:05.222 03:56:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:05.222 SPDK target shutdown done 00:07:05.222 Success 00:07:05.222 03:56:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:05.222 03:56:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:05.222 00:07:05.222 real 0m2.332s 00:07:05.222 user 0m1.797s 00:07:05.222 sys 0m0.605s 00:07:05.222 ************************************ 00:07:05.222 END TEST json_config_extra_key 00:07:05.222 ************************************ 00:07:05.222 03:56:47 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.222 03:56:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:05.222 03:56:47 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:05.222 03:56:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.222 03:56:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.222 03:56:47 -- common/autotest_common.sh@10 -- # set +x 00:07:05.222 ************************************ 00:07:05.222 START TEST alias_rpc 00:07:05.222 ************************************ 00:07:05.222 03:56:47 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:05.481 * Looking for test storage... 
00:07:05.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:05.481 03:56:47 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:05.481 03:56:47 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:05.481 03:56:47 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:05.481 03:56:47 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.481 03:56:47 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:05.481 03:56:47 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.481 03:56:47 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:05.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.481 --rc genhtml_branch_coverage=1 00:07:05.481 --rc genhtml_function_coverage=1 00:07:05.481 --rc genhtml_legend=1 00:07:05.481 --rc geninfo_all_blocks=1 00:07:05.481 --rc geninfo_unexecuted_blocks=1 00:07:05.481 00:07:05.481 ' 00:07:05.481 03:56:47 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:05.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.481 --rc genhtml_branch_coverage=1 00:07:05.481 --rc genhtml_function_coverage=1 00:07:05.481 --rc genhtml_legend=1 00:07:05.481 --rc geninfo_all_blocks=1 00:07:05.481 --rc geninfo_unexecuted_blocks=1 00:07:05.481 00:07:05.481 ' 00:07:05.481 03:56:47 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:05.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.481 --rc genhtml_branch_coverage=1 00:07:05.481 --rc genhtml_function_coverage=1 00:07:05.481 --rc genhtml_legend=1 00:07:05.481 --rc geninfo_all_blocks=1 00:07:05.481 --rc geninfo_unexecuted_blocks=1 00:07:05.481 00:07:05.481 ' 00:07:05.481 03:56:47 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:05.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.481 --rc genhtml_branch_coverage=1 00:07:05.481 --rc genhtml_function_coverage=1 00:07:05.481 --rc genhtml_legend=1 00:07:05.481 --rc geninfo_all_blocks=1 00:07:05.481 --rc geninfo_unexecuted_blocks=1 00:07:05.481 00:07:05.481 ' 00:07:05.481 03:56:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:05.481 03:56:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57936 00:07:05.481 03:56:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:05.481 03:56:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57936 00:07:05.481 03:56:47 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57936 ']' 00:07:05.481 03:56:47 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.481 03:56:47 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.481 03:56:47 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.481 03:56:47 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.481 03:56:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.482 [2024-12-09 03:56:47.401262] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:07:05.482 [2024-12-09 03:56:47.401379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57936 ] 00:07:05.740 [2024-12-09 03:56:47.550871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.740 [2024-12-09 03:56:47.639588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.999 [2024-12-09 03:56:47.747581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.568 03:56:48 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.568 03:56:48 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:06.568 03:56:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:07.136 03:56:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57936 00:07:07.136 03:56:48 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57936 ']' 00:07:07.136 03:56:48 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57936 00:07:07.136 03:56:48 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:07.136 03:56:48 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.136 03:56:48 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57936 00:07:07.136 killing process with pid 57936 00:07:07.136 03:56:48 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.136 03:56:48 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.136 03:56:48 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57936' 00:07:07.136 03:56:48 alias_rpc -- common/autotest_common.sh@973 -- # kill 57936 00:07:07.136 03:56:48 alias_rpc -- common/autotest_common.sh@978 -- # wait 57936 00:07:07.704 ************************************ 00:07:07.704 END TEST alias_rpc 00:07:07.704 ************************************ 00:07:07.704 00:07:07.704 real 0m2.276s 00:07:07.704 user 0m2.555s 00:07:07.704 sys 0m0.579s 00:07:07.704 03:56:49 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.704 03:56:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.704 03:56:49 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:07.704 03:56:49 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:07.704 03:56:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.704 03:56:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.704 03:56:49 -- common/autotest_common.sh@10 -- # set +x 00:07:07.704 ************************************ 00:07:07.704 START TEST spdkcli_tcp 00:07:07.704 ************************************ 00:07:07.704 03:56:49 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:07.704 * Looking for test storage... 
00:07:07.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:07.704 03:56:49 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:07.704 03:56:49 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:07.704 03:56:49 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:07.704 03:56:49 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:07.704 03:56:49 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.704 03:56:49 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.704 03:56:49 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.705 03:56:49 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:07.705 03:56:49 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.705 03:56:49 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:07.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.705 --rc genhtml_branch_coverage=1 00:07:07.705 --rc genhtml_function_coverage=1 00:07:07.705 --rc genhtml_legend=1 00:07:07.705 --rc geninfo_all_blocks=1 00:07:07.705 --rc geninfo_unexecuted_blocks=1 00:07:07.705 00:07:07.705 ' 00:07:07.705 03:56:49 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:07.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.705 --rc genhtml_branch_coverage=1 00:07:07.705 --rc genhtml_function_coverage=1 00:07:07.705 --rc genhtml_legend=1 00:07:07.705 --rc geninfo_all_blocks=1 00:07:07.705 --rc geninfo_unexecuted_blocks=1 00:07:07.705 
00:07:07.705 ' 00:07:07.705 03:56:49 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:07.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.705 --rc genhtml_branch_coverage=1 00:07:07.705 --rc genhtml_function_coverage=1 00:07:07.705 --rc genhtml_legend=1 00:07:07.705 --rc geninfo_all_blocks=1 00:07:07.705 --rc geninfo_unexecuted_blocks=1 00:07:07.705 00:07:07.705 ' 00:07:07.705 03:56:49 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:07.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.705 --rc genhtml_branch_coverage=1 00:07:07.705 --rc genhtml_function_coverage=1 00:07:07.705 --rc genhtml_legend=1 00:07:07.705 --rc geninfo_all_blocks=1 00:07:07.705 --rc geninfo_unexecuted_blocks=1 00:07:07.705 00:07:07.705 ' 00:07:07.705 03:56:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:07.705 03:56:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:07.705 03:56:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:07.705 03:56:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:07.705 03:56:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:07.705 03:56:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:07.705 03:56:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:07.705 03:56:49 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:07.705 03:56:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.705 03:56:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58020 00:07:07.705 03:56:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:07.705 03:56:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58020 00:07:07.705 03:56:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58020 ']' 00:07:07.705 03:56:49 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.705 03:56:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.705 03:56:49 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.705 03:56:49 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.705 03:56:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.962 [2024-12-09 03:56:49.697168] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:07:07.962 [2024-12-09 03:56:49.697662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58020 ] 00:07:07.962 [2024-12-09 03:56:49.833966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.962 [2024-12-09 03:56:49.903225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.962 [2024-12-09 03:56:49.903251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.220 [2024-12-09 03:56:49.999516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.786 03:56:50 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.786 03:56:50 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:09.046 03:56:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58037 00:07:09.046 03:56:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:09.046 03:56:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:09.046 [ 00:07:09.046 "bdev_malloc_delete", 00:07:09.046 "bdev_malloc_create", 00:07:09.046 "bdev_null_resize", 00:07:09.046 "bdev_null_delete", 00:07:09.046 "bdev_null_create", 00:07:09.046 "bdev_nvme_cuse_unregister", 00:07:09.046 "bdev_nvme_cuse_register", 00:07:09.046 "bdev_opal_new_user", 00:07:09.046 "bdev_opal_set_lock_state", 00:07:09.046 "bdev_opal_delete", 00:07:09.046 "bdev_opal_get_info", 00:07:09.046 "bdev_opal_create", 00:07:09.046 "bdev_nvme_opal_revert", 00:07:09.046 "bdev_nvme_opal_init", 00:07:09.046 "bdev_nvme_send_cmd", 00:07:09.046 "bdev_nvme_set_keys", 00:07:09.046 "bdev_nvme_get_path_iostat", 00:07:09.046 "bdev_nvme_get_mdns_discovery_info", 00:07:09.046 "bdev_nvme_stop_mdns_discovery", 00:07:09.046 "bdev_nvme_start_mdns_discovery", 00:07:09.046 "bdev_nvme_set_multipath_policy", 00:07:09.046 "bdev_nvme_set_preferred_path", 00:07:09.046 "bdev_nvme_get_io_paths", 00:07:09.046 "bdev_nvme_remove_error_injection", 00:07:09.046 "bdev_nvme_add_error_injection", 00:07:09.046 "bdev_nvme_get_discovery_info", 00:07:09.046 "bdev_nvme_stop_discovery", 00:07:09.046 "bdev_nvme_start_discovery", 00:07:09.046 "bdev_nvme_get_controller_health_info", 00:07:09.046 "bdev_nvme_disable_controller", 00:07:09.046 "bdev_nvme_enable_controller", 00:07:09.046 "bdev_nvme_reset_controller", 00:07:09.046 "bdev_nvme_get_transport_statistics", 00:07:09.046 "bdev_nvme_apply_firmware", 00:07:09.046 "bdev_nvme_detach_controller", 00:07:09.046 "bdev_nvme_get_controllers", 00:07:09.046 "bdev_nvme_attach_controller", 00:07:09.046 "bdev_nvme_set_hotplug", 00:07:09.046 "bdev_nvme_set_options", 00:07:09.046 "bdev_passthru_delete", 00:07:09.046 "bdev_passthru_create", 00:07:09.046 "bdev_lvol_set_parent_bdev", 00:07:09.046 "bdev_lvol_set_parent", 00:07:09.046 "bdev_lvol_check_shallow_copy", 00:07:09.046 "bdev_lvol_start_shallow_copy", 00:07:09.046 "bdev_lvol_grow_lvstore", 00:07:09.046 "bdev_lvol_get_lvols", 00:07:09.046 "bdev_lvol_get_lvstores", 00:07:09.046 "bdev_lvol_delete", 00:07:09.046 "bdev_lvol_set_read_only", 00:07:09.046 "bdev_lvol_resize", 00:07:09.046 "bdev_lvol_decouple_parent", 00:07:09.046 "bdev_lvol_inflate", 00:07:09.046 "bdev_lvol_rename", 00:07:09.046 "bdev_lvol_clone_bdev", 00:07:09.046 "bdev_lvol_clone", 00:07:09.046 "bdev_lvol_snapshot", 
00:07:09.046 "bdev_lvol_create", 00:07:09.046 "bdev_lvol_delete_lvstore", 00:07:09.046 "bdev_lvol_rename_lvstore", 00:07:09.046 "bdev_lvol_create_lvstore", 00:07:09.046 "bdev_raid_set_options", 00:07:09.046 "bdev_raid_remove_base_bdev", 00:07:09.046 "bdev_raid_add_base_bdev", 00:07:09.046 "bdev_raid_delete", 00:07:09.046 "bdev_raid_create", 00:07:09.046 "bdev_raid_get_bdevs", 00:07:09.046 "bdev_error_inject_error", 00:07:09.046 "bdev_error_delete", 00:07:09.046 "bdev_error_create", 00:07:09.046 "bdev_split_delete", 00:07:09.046 "bdev_split_create", 00:07:09.046 "bdev_delay_delete", 00:07:09.046 "bdev_delay_create", 00:07:09.046 "bdev_delay_update_latency", 00:07:09.046 "bdev_zone_block_delete", 00:07:09.046 "bdev_zone_block_create", 00:07:09.046 "blobfs_create", 00:07:09.046 "blobfs_detect", 00:07:09.046 "blobfs_set_cache_size", 00:07:09.046 "bdev_aio_delete", 00:07:09.046 "bdev_aio_rescan", 00:07:09.046 "bdev_aio_create", 00:07:09.046 "bdev_ftl_set_property", 00:07:09.046 "bdev_ftl_get_properties", 00:07:09.046 "bdev_ftl_get_stats", 00:07:09.046 "bdev_ftl_unmap", 00:07:09.046 "bdev_ftl_unload", 00:07:09.046 "bdev_ftl_delete", 00:07:09.046 "bdev_ftl_load", 00:07:09.046 "bdev_ftl_create", 00:07:09.046 "bdev_virtio_attach_controller", 00:07:09.046 "bdev_virtio_scsi_get_devices", 00:07:09.046 "bdev_virtio_detach_controller", 00:07:09.046 "bdev_virtio_blk_set_hotplug", 00:07:09.046 "bdev_iscsi_delete", 00:07:09.046 "bdev_iscsi_create", 00:07:09.046 "bdev_iscsi_set_options", 00:07:09.046 "bdev_uring_delete", 00:07:09.046 "bdev_uring_rescan", 00:07:09.046 "bdev_uring_create", 00:07:09.046 "accel_error_inject_error", 00:07:09.046 "ioat_scan_accel_module", 00:07:09.046 "dsa_scan_accel_module", 00:07:09.046 "iaa_scan_accel_module", 00:07:09.046 "keyring_file_remove_key", 00:07:09.046 "keyring_file_add_key", 00:07:09.046 "keyring_linux_set_options", 00:07:09.046 "fsdev_aio_delete", 00:07:09.046 "fsdev_aio_create", 00:07:09.046 "iscsi_get_histogram", 00:07:09.046 "iscsi_enable_histogram", 00:07:09.046 "iscsi_set_options", 00:07:09.046 "iscsi_get_auth_groups", 00:07:09.046 "iscsi_auth_group_remove_secret", 00:07:09.046 "iscsi_auth_group_add_secret", 00:07:09.046 "iscsi_delete_auth_group", 00:07:09.046 "iscsi_create_auth_group", 00:07:09.046 "iscsi_set_discovery_auth", 00:07:09.046 "iscsi_get_options", 00:07:09.046 "iscsi_target_node_request_logout", 00:07:09.046 "iscsi_target_node_set_redirect", 00:07:09.046 "iscsi_target_node_set_auth", 00:07:09.046 "iscsi_target_node_add_lun", 00:07:09.046 "iscsi_get_stats", 00:07:09.046 "iscsi_get_connections", 00:07:09.046 "iscsi_portal_group_set_auth", 00:07:09.046 "iscsi_start_portal_group", 00:07:09.046 "iscsi_delete_portal_group", 00:07:09.046 "iscsi_create_portal_group", 00:07:09.046 "iscsi_get_portal_groups", 00:07:09.046 "iscsi_delete_target_node", 00:07:09.046 "iscsi_target_node_remove_pg_ig_maps", 00:07:09.046 "iscsi_target_node_add_pg_ig_maps", 00:07:09.046 "iscsi_create_target_node", 00:07:09.046 "iscsi_get_target_nodes", 00:07:09.046 "iscsi_delete_initiator_group", 00:07:09.046 "iscsi_initiator_group_remove_initiators", 00:07:09.046 "iscsi_initiator_group_add_initiators", 00:07:09.046 "iscsi_create_initiator_group", 00:07:09.046 "iscsi_get_initiator_groups", 00:07:09.046 "nvmf_set_crdt", 00:07:09.046 "nvmf_set_config", 00:07:09.046 "nvmf_set_max_subsystems", 00:07:09.046 "nvmf_stop_mdns_prr", 00:07:09.046 "nvmf_publish_mdns_prr", 00:07:09.046 "nvmf_subsystem_get_listeners", 00:07:09.046 "nvmf_subsystem_get_qpairs", 00:07:09.046 
"nvmf_subsystem_get_controllers", 00:07:09.046 "nvmf_get_stats", 00:07:09.046 "nvmf_get_transports", 00:07:09.046 "nvmf_create_transport", 00:07:09.046 "nvmf_get_targets", 00:07:09.046 "nvmf_delete_target", 00:07:09.046 "nvmf_create_target", 00:07:09.046 "nvmf_subsystem_allow_any_host", 00:07:09.046 "nvmf_subsystem_set_keys", 00:07:09.046 "nvmf_subsystem_remove_host", 00:07:09.046 "nvmf_subsystem_add_host", 00:07:09.046 "nvmf_ns_remove_host", 00:07:09.046 "nvmf_ns_add_host", 00:07:09.046 "nvmf_subsystem_remove_ns", 00:07:09.046 "nvmf_subsystem_set_ns_ana_group", 00:07:09.046 "nvmf_subsystem_add_ns", 00:07:09.046 "nvmf_subsystem_listener_set_ana_state", 00:07:09.046 "nvmf_discovery_get_referrals", 00:07:09.046 "nvmf_discovery_remove_referral", 00:07:09.046 "nvmf_discovery_add_referral", 00:07:09.046 "nvmf_subsystem_remove_listener", 00:07:09.046 "nvmf_subsystem_add_listener", 00:07:09.046 "nvmf_delete_subsystem", 00:07:09.046 "nvmf_create_subsystem", 00:07:09.046 "nvmf_get_subsystems", 00:07:09.047 "env_dpdk_get_mem_stats", 00:07:09.047 "nbd_get_disks", 00:07:09.047 "nbd_stop_disk", 00:07:09.047 "nbd_start_disk", 00:07:09.047 "ublk_recover_disk", 00:07:09.047 "ublk_get_disks", 00:07:09.047 "ublk_stop_disk", 00:07:09.047 "ublk_start_disk", 00:07:09.047 "ublk_destroy_target", 00:07:09.047 "ublk_create_target", 00:07:09.047 "virtio_blk_create_transport", 00:07:09.047 "virtio_blk_get_transports", 00:07:09.047 "vhost_controller_set_coalescing", 00:07:09.047 "vhost_get_controllers", 00:07:09.047 "vhost_delete_controller", 00:07:09.047 "vhost_create_blk_controller", 00:07:09.047 "vhost_scsi_controller_remove_target", 00:07:09.047 "vhost_scsi_controller_add_target", 00:07:09.047 "vhost_start_scsi_controller", 00:07:09.047 "vhost_create_scsi_controller", 00:07:09.047 "thread_set_cpumask", 00:07:09.047 "scheduler_set_options", 00:07:09.047 "framework_get_governor", 00:07:09.047 "framework_get_scheduler", 00:07:09.047 "framework_set_scheduler", 00:07:09.047 "framework_get_reactors", 00:07:09.047 "thread_get_io_channels", 00:07:09.047 "thread_get_pollers", 00:07:09.047 "thread_get_stats", 00:07:09.047 "framework_monitor_context_switch", 00:07:09.047 "spdk_kill_instance", 00:07:09.047 "log_enable_timestamps", 00:07:09.047 "log_get_flags", 00:07:09.047 "log_clear_flag", 00:07:09.047 "log_set_flag", 00:07:09.047 "log_get_level", 00:07:09.047 "log_set_level", 00:07:09.047 "log_get_print_level", 00:07:09.047 "log_set_print_level", 00:07:09.047 "framework_enable_cpumask_locks", 00:07:09.047 "framework_disable_cpumask_locks", 00:07:09.047 "framework_wait_init", 00:07:09.047 "framework_start_init", 00:07:09.047 "scsi_get_devices", 00:07:09.047 "bdev_get_histogram", 00:07:09.047 "bdev_enable_histogram", 00:07:09.047 "bdev_set_qos_limit", 00:07:09.047 "bdev_set_qd_sampling_period", 00:07:09.047 "bdev_get_bdevs", 00:07:09.047 "bdev_reset_iostat", 00:07:09.047 "bdev_get_iostat", 00:07:09.047 "bdev_examine", 00:07:09.047 "bdev_wait_for_examine", 00:07:09.047 "bdev_set_options", 00:07:09.047 "accel_get_stats", 00:07:09.047 "accel_set_options", 00:07:09.047 "accel_set_driver", 00:07:09.047 "accel_crypto_key_destroy", 00:07:09.047 "accel_crypto_keys_get", 00:07:09.047 "accel_crypto_key_create", 00:07:09.047 "accel_assign_opc", 00:07:09.047 "accel_get_module_info", 00:07:09.047 "accel_get_opc_assignments", 00:07:09.047 "vmd_rescan", 00:07:09.047 "vmd_remove_device", 00:07:09.047 "vmd_enable", 00:07:09.047 "sock_get_default_impl", 00:07:09.047 "sock_set_default_impl", 00:07:09.047 "sock_impl_set_options", 00:07:09.047 
"sock_impl_get_options", 00:07:09.047 "iobuf_get_stats", 00:07:09.047 "iobuf_set_options", 00:07:09.047 "keyring_get_keys", 00:07:09.047 "framework_get_pci_devices", 00:07:09.047 "framework_get_config", 00:07:09.047 "framework_get_subsystems", 00:07:09.047 "fsdev_set_opts", 00:07:09.047 "fsdev_get_opts", 00:07:09.047 "trace_get_info", 00:07:09.047 "trace_get_tpoint_group_mask", 00:07:09.047 "trace_disable_tpoint_group", 00:07:09.047 "trace_enable_tpoint_group", 00:07:09.047 "trace_clear_tpoint_mask", 00:07:09.047 "trace_set_tpoint_mask", 00:07:09.047 "notify_get_notifications", 00:07:09.047 "notify_get_types", 00:07:09.047 "spdk_get_version", 00:07:09.047 "rpc_get_methods" 00:07:09.047 ] 00:07:09.047 03:56:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:09.047 03:56:50 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:09.047 03:56:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:09.306 03:56:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:09.306 03:56:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58020 00:07:09.306 03:56:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58020 ']' 00:07:09.306 03:56:51 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58020 00:07:09.306 03:56:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:09.306 03:56:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.306 03:56:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58020 00:07:09.306 killing process with pid 58020 00:07:09.306 03:56:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.306 03:56:51 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.306 03:56:51 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58020' 00:07:09.306 03:56:51 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58020 00:07:09.306 03:56:51 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58020 00:07:09.872 ************************************ 00:07:09.872 END TEST spdkcli_tcp 00:07:09.872 ************************************ 00:07:09.872 00:07:09.872 real 0m2.142s 00:07:09.872 user 0m3.892s 00:07:09.872 sys 0m0.628s 00:07:09.872 03:56:51 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.872 03:56:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:09.872 03:56:51 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:09.872 03:56:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.872 03:56:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.872 03:56:51 -- common/autotest_common.sh@10 -- # set +x 00:07:09.872 ************************************ 00:07:09.872 START TEST dpdk_mem_utility 00:07:09.872 ************************************ 00:07:09.872 03:56:51 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:09.872 * Looking for test storage... 
00:07:09.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:09.872 03:56:51 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:09.872 03:56:51 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:07:09.872 03:56:51 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:10.131 03:56:51 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.131 03:56:51 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:10.131 03:56:51 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.131 03:56:51 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:10.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.131 --rc genhtml_branch_coverage=1 00:07:10.131 --rc genhtml_function_coverage=1 00:07:10.131 --rc genhtml_legend=1 00:07:10.131 --rc geninfo_all_blocks=1 00:07:10.131 --rc geninfo_unexecuted_blocks=1 00:07:10.131 00:07:10.131 ' 00:07:10.131 03:56:51 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:10.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.131 --rc 
genhtml_branch_coverage=1 00:07:10.131 --rc genhtml_function_coverage=1 00:07:10.131 --rc genhtml_legend=1 00:07:10.131 --rc geninfo_all_blocks=1 00:07:10.131 --rc geninfo_unexecuted_blocks=1 00:07:10.131 00:07:10.131 ' 00:07:10.131 03:56:51 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:10.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.131 --rc genhtml_branch_coverage=1 00:07:10.131 --rc genhtml_function_coverage=1 00:07:10.131 --rc genhtml_legend=1 00:07:10.131 --rc geninfo_all_blocks=1 00:07:10.131 --rc geninfo_unexecuted_blocks=1 00:07:10.131 00:07:10.131 ' 00:07:10.131 03:56:51 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:10.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.131 --rc genhtml_branch_coverage=1 00:07:10.131 --rc genhtml_function_coverage=1 00:07:10.131 --rc genhtml_legend=1 00:07:10.131 --rc geninfo_all_blocks=1 00:07:10.131 --rc geninfo_unexecuted_blocks=1 00:07:10.131 00:07:10.131 ' 00:07:10.131 03:56:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:10.131 03:56:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58119 00:07:10.131 03:56:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:10.131 03:56:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58119 00:07:10.131 03:56:51 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58119 ']' 00:07:10.131 03:56:51 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.131 03:56:51 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.131 03:56:51 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.131 03:56:51 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.131 03:56:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:10.131 [2024-12-09 03:56:51.911364] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:07:10.131 [2024-12-09 03:56:51.911510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58119 ] 00:07:10.131 [2024-12-09 03:56:52.055347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.390 [2024-12-09 03:56:52.132748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.390 [2024-12-09 03:56:52.229831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.648 03:56:52 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.648 03:56:52 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:10.648 03:56:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:10.648 03:56:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:10.648 03:56:52 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.648 03:56:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:10.648 { 00:07:10.648 "filename": "/tmp/spdk_mem_dump.txt" 00:07:10.648 } 00:07:10.648 03:56:52 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.648 03:56:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:10.648 DPDK memory size 818.000000 MiB in 1 heap(s) 00:07:10.648 1 heaps totaling size 818.000000 MiB 00:07:10.648 size: 818.000000 MiB heap id: 0 00:07:10.648 end heaps---------- 00:07:10.648 9 mempools totaling size 603.782043 MiB 00:07:10.649 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:10.649 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:10.649 size: 100.555481 MiB name: bdev_io_58119 00:07:10.649 size: 50.003479 MiB name: msgpool_58119 00:07:10.649 size: 36.509338 MiB name: fsdev_io_58119 00:07:10.649 size: 21.763794 MiB name: PDU_Pool 00:07:10.649 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:10.649 size: 4.133484 MiB name: evtpool_58119 00:07:10.649 size: 0.026123 MiB name: Session_Pool 00:07:10.649 end mempools------- 00:07:10.649 6 memzones totaling size 4.142822 MiB 00:07:10.649 size: 1.000366 MiB name: RG_ring_0_58119 00:07:10.649 size: 1.000366 MiB name: RG_ring_1_58119 00:07:10.649 size: 1.000366 MiB name: RG_ring_4_58119 00:07:10.649 size: 1.000366 MiB name: RG_ring_5_58119 00:07:10.649 size: 0.125366 MiB name: RG_ring_2_58119 00:07:10.649 size: 0.015991 MiB name: RG_ring_3_58119 00:07:10.649 end memzones------- 00:07:10.649 03:56:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:10.908 heap id: 0 total size: 818.000000 MiB number of busy elements: 313 number of free elements: 15 00:07:10.908 list of free elements. 
size: 10.803223 MiB 00:07:10.908 element at address: 0x200019200000 with size: 0.999878 MiB 00:07:10.908 element at address: 0x200019400000 with size: 0.999878 MiB 00:07:10.908 element at address: 0x200032000000 with size: 0.994446 MiB 00:07:10.908 element at address: 0x200000400000 with size: 0.993958 MiB 00:07:10.908 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:10.908 element at address: 0x200012c00000 with size: 0.944275 MiB 00:07:10.908 element at address: 0x200019600000 with size: 0.936584 MiB 00:07:10.908 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:10.908 element at address: 0x20001ae00000 with size: 0.568237 MiB 00:07:10.908 element at address: 0x20000a600000 with size: 0.488892 MiB 00:07:10.908 element at address: 0x200000c00000 with size: 0.486267 MiB 00:07:10.908 element at address: 0x200019800000 with size: 0.485657 MiB 00:07:10.908 element at address: 0x200003e00000 with size: 0.480286 MiB 00:07:10.908 element at address: 0x200028200000 with size: 0.395935 MiB 00:07:10.908 element at address: 0x200000800000 with size: 0.351746 MiB 00:07:10.908 list of standard malloc elements. size: 199.267883 MiB 00:07:10.908 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:10.908 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:10.908 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:10.908 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:07:10.908 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:07:10.908 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:10.908 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:07:10.908 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:10.908 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:07:10.908 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:07:10.908 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:07:10.908 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:10.909 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:10.909 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000085e580 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087e840 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087e900 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087f080 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087f140 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087f200 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087f380 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087f440 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087f500 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:10.909 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:10.909 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:07:10.909 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:10.909 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:10.909 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:07:10.909 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:07:10.909 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:07:10.909 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 
00:07:10.910 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:07:10.910 element at 
address: 0x20001ae95440 with size: 0.000183 MiB 00:07:10.910 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x200028265680 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826c280 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826c480 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826c540 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826c600 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826c780 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826c840 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826c900 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826d080 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826d140 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826d200 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826d380 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826d440 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826d500 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826d680 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826d740 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826d800 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826d980 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826da40 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826db00 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826de00 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:07:10.910 element at address: 0x20002826df80 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826e040 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826e100 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826e280 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826e340 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826e400 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826e580 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826e640 
with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826e700 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826e880 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826e940 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826f000 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826f180 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826f240 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826f300 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826f480 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826f540 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826f600 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826f780 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826f840 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826f900 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:07:10.911 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:07:10.911 list of memzone associated elements. 
size: 607.928894 MiB 00:07:10.911 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:07:10.911 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:10.911 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:07:10.911 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:10.911 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:07:10.911 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58119_0 00:07:10.911 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:10.911 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58119_0 00:07:10.911 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:10.911 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58119_0 00:07:10.911 element at address: 0x2000199be940 with size: 20.255554 MiB 00:07:10.911 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:10.911 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:07:10.911 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:10.911 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:10.911 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58119_0 00:07:10.911 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:10.911 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58119 00:07:10.911 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:10.911 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58119 00:07:10.911 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:10.911 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:10.911 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:07:10.911 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:10.911 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:10.911 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:10.911 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:10.911 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:10.911 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:10.911 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58119 00:07:10.911 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:10.911 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58119 00:07:10.911 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:07:10.911 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58119 00:07:10.911 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:07:10.911 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58119 00:07:10.911 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:10.911 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58119 00:07:10.911 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:10.911 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58119 00:07:10.911 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:10.911 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:10.911 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:10.911 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:10.911 element at address: 0x20001987c540 with size: 0.250488 MiB 00:07:10.911 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:07:10.911 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:10.911 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58119 00:07:10.911 element at address: 0x20000085e640 with size: 0.125488 MiB 00:07:10.911 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58119 00:07:10.911 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:07:10.911 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:10.911 element at address: 0x200028265740 with size: 0.023743 MiB 00:07:10.911 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:10.911 element at address: 0x20000085a380 with size: 0.016113 MiB 00:07:10.911 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58119 00:07:10.911 element at address: 0x20002826b880 with size: 0.002441 MiB 00:07:10.911 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:10.911 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:07:10.911 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58119 00:07:10.911 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:10.911 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58119 00:07:10.911 element at address: 0x20000085a180 with size: 0.000305 MiB 00:07:10.911 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58119 00:07:10.911 element at address: 0x20002826c340 with size: 0.000305 MiB 00:07:10.911 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:10.911 03:56:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:10.911 03:56:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58119 00:07:10.911 03:56:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58119 ']' 00:07:10.911 03:56:52 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58119 00:07:10.911 03:56:52 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:10.912 03:56:52 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.912 03:56:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58119 00:07:10.912 03:56:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.912 03:56:52 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.912 killing process with pid 58119 00:07:10.912 03:56:52 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58119' 00:07:10.912 03:56:52 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58119 00:07:10.912 03:56:52 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58119 00:07:11.478 00:07:11.478 real 0m1.606s 00:07:11.478 user 0m1.461s 00:07:11.478 sys 0m0.520s 00:07:11.478 03:56:53 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.478 ************************************ 00:07:11.478 03:56:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:11.478 END TEST dpdk_mem_utility 00:07:11.478 ************************************ 00:07:11.478 03:56:53 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:11.478 03:56:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.478 03:56:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.478 03:56:53 -- common/autotest_common.sh@10 -- # set +x 
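The dpdk_mem_utility run above exercises SPDK's DPDK memory introspection end to end: spdk_tgt is started, the env_dpdk_get_mem_stats RPC asks it to write a stats dump (the trace shows it returning "/tmp/spdk_mem_dump.txt"), and scripts/dpdk_mem_info.py renders that dump as the heap/mempool/memzone listing captured above. A minimal manual sketch of the same flow, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk as in this log, and taking the dpdk_mem_info.py invocations (bare, then -m 0 for the per-element view) from the trace above rather than from verified documentation:

    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/bin/spdk_tgt &                    # start the target in the background
    sleep 2                                       # crude stand-in for the test's waitforlisten
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    $SPDK/scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
    $SPDK/scripts/dpdk_mem_info.py -m 0           # per-element listing, as captured above
    kill $!                                       # stop the target when done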
00:07:11.478 ************************************ 00:07:11.478 START TEST event 00:07:11.478 ************************************ 00:07:11.478 03:56:53 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:11.478 * Looking for test storage... 00:07:11.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:11.478 03:56:53 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:11.478 03:56:53 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:11.478 03:56:53 event -- common/autotest_common.sh@1711 -- # lcov --version 00:07:11.736 03:56:53 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:11.736 03:56:53 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.736 03:56:53 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.736 03:56:53 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.736 03:56:53 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.736 03:56:53 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.736 03:56:53 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.736 03:56:53 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.736 03:56:53 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.736 03:56:53 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.736 03:56:53 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.736 03:56:53 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.736 03:56:53 event -- scripts/common.sh@344 -- # case "$op" in 00:07:11.736 03:56:53 event -- scripts/common.sh@345 -- # : 1 00:07:11.736 03:56:53 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.736 03:56:53 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.736 03:56:53 event -- scripts/common.sh@365 -- # decimal 1 00:07:11.736 03:56:53 event -- scripts/common.sh@353 -- # local d=1 00:07:11.736 03:56:53 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.736 03:56:53 event -- scripts/common.sh@355 -- # echo 1 00:07:11.736 03:56:53 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.736 03:56:53 event -- scripts/common.sh@366 -- # decimal 2 00:07:11.736 03:56:53 event -- scripts/common.sh@353 -- # local d=2 00:07:11.736 03:56:53 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.736 03:56:53 event -- scripts/common.sh@355 -- # echo 2 00:07:11.736 03:56:53 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.737 03:56:53 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.737 03:56:53 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.737 03:56:53 event -- scripts/common.sh@368 -- # return 0 00:07:11.737 03:56:53 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.737 03:56:53 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:11.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.737 --rc genhtml_branch_coverage=1 00:07:11.737 --rc genhtml_function_coverage=1 00:07:11.737 --rc genhtml_legend=1 00:07:11.737 --rc geninfo_all_blocks=1 00:07:11.737 --rc geninfo_unexecuted_blocks=1 00:07:11.737 00:07:11.737 ' 00:07:11.737 03:56:53 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:11.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.737 --rc genhtml_branch_coverage=1 00:07:11.737 --rc genhtml_function_coverage=1 00:07:11.737 --rc genhtml_legend=1 00:07:11.737 --rc 
geninfo_all_blocks=1 00:07:11.737 --rc geninfo_unexecuted_blocks=1 00:07:11.737 00:07:11.737 ' 00:07:11.737 03:56:53 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:11.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.737 --rc genhtml_branch_coverage=1 00:07:11.737 --rc genhtml_function_coverage=1 00:07:11.737 --rc genhtml_legend=1 00:07:11.737 --rc geninfo_all_blocks=1 00:07:11.737 --rc geninfo_unexecuted_blocks=1 00:07:11.737 00:07:11.737 ' 00:07:11.737 03:56:53 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:11.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.737 --rc genhtml_branch_coverage=1 00:07:11.737 --rc genhtml_function_coverage=1 00:07:11.737 --rc genhtml_legend=1 00:07:11.737 --rc geninfo_all_blocks=1 00:07:11.737 --rc geninfo_unexecuted_blocks=1 00:07:11.737 00:07:11.737 ' 00:07:11.737 03:56:53 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:11.737 03:56:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:11.737 03:56:53 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:11.737 03:56:53 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:11.737 03:56:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.737 03:56:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.737 ************************************ 00:07:11.737 START TEST event_perf 00:07:11.737 ************************************ 00:07:11.737 03:56:53 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:11.737 Running I/O for 1 seconds...[2024-12-09 03:56:53.530145] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:07:11.737 [2024-12-09 03:56:53.530289] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58202 ] 00:07:11.737 [2024-12-09 03:56:53.672372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.996 [2024-12-09 03:56:53.735035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.996 [2024-12-09 03:56:53.735209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.996 [2024-12-09 03:56:53.735333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.996 [2024-12-09 03:56:53.735338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.930 Running I/O for 1 seconds... 00:07:12.930 lcore 0: 200814 00:07:12.930 lcore 1: 200814 00:07:12.930 lcore 2: 200815 00:07:12.930 lcore 3: 200818 00:07:12.930 done. 
00:07:12.930 00:07:12.930 real 0m1.285s 00:07:12.930 user 0m4.101s 00:07:12.930 sys 0m0.061s 00:07:12.930 03:56:54 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.930 ************************************ 00:07:12.930 END TEST event_perf 00:07:12.930 03:56:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:12.930 ************************************ 00:07:12.930 03:56:54 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:12.930 03:56:54 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:12.930 03:56:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.930 03:56:54 event -- common/autotest_common.sh@10 -- # set +x 00:07:12.930 ************************************ 00:07:12.930 START TEST event_reactor 00:07:12.930 ************************************ 00:07:12.930 03:56:54 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:12.930 [2024-12-09 03:56:54.866503] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:07:12.930 [2024-12-09 03:56:54.866625] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58235 ] 00:07:13.190 [2024-12-09 03:56:55.005264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.190 [2024-12-09 03:56:55.075954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.577 test_start 00:07:14.577 oneshot 00:07:14.577 tick 100 00:07:14.577 tick 100 00:07:14.577 tick 250 00:07:14.577 tick 100 00:07:14.577 tick 100 00:07:14.577 tick 100 00:07:14.577 tick 250 00:07:14.577 tick 500 00:07:14.577 tick 100 00:07:14.577 tick 100 00:07:14.577 tick 250 00:07:14.577 tick 100 00:07:14.577 tick 100 00:07:14.577 test_end 00:07:14.577 00:07:14.577 real 0m1.291s 00:07:14.577 user 0m1.131s 00:07:14.577 sys 0m0.054s 00:07:14.577 03:56:56 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.577 03:56:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:14.577 ************************************ 00:07:14.577 END TEST event_reactor 00:07:14.577 ************************************ 00:07:14.577 03:56:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:14.577 03:56:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:14.577 03:56:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.577 03:56:56 event -- common/autotest_common.sh@10 -- # set +x 00:07:14.577 ************************************ 00:07:14.577 START TEST event_reactor_perf 00:07:14.577 ************************************ 00:07:14.577 03:56:56 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:14.577 [2024-12-09 03:56:56.210071] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:07:14.577 [2024-12-09 03:56:56.210246] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58270 ] 00:07:14.577 [2024-12-09 03:56:56.355026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.577 [2024-12-09 03:56:56.434181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.954 test_start 00:07:15.954 test_end 00:07:15.954 Performance: 406253 events per second 00:07:15.954 00:07:15.954 real 0m1.314s 00:07:15.954 user 0m1.152s 00:07:15.954 sys 0m0.056s 00:07:15.954 03:56:57 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.954 03:56:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:15.954 ************************************ 00:07:15.954 END TEST event_reactor_perf 00:07:15.954 ************************************ 00:07:15.954 03:56:57 event -- event/event.sh@49 -- # uname -s 00:07:15.954 03:56:57 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:15.954 03:56:57 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:15.954 03:56:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.954 03:56:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.954 03:56:57 event -- common/autotest_common.sh@10 -- # set +x 00:07:15.954 ************************************ 00:07:15.954 START TEST event_scheduler 00:07:15.954 ************************************ 00:07:15.954 03:56:57 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:15.954 * Looking for test storage... 
00:07:15.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:15.954 03:56:57 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:15.954 03:56:57 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:07:15.954 03:56:57 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:15.954 03:56:57 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.954 03:56:57 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:15.954 03:56:57 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.954 03:56:57 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:15.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.954 --rc genhtml_branch_coverage=1 00:07:15.954 --rc genhtml_function_coverage=1 00:07:15.954 --rc genhtml_legend=1 00:07:15.954 --rc geninfo_all_blocks=1 00:07:15.954 --rc geninfo_unexecuted_blocks=1 00:07:15.954 00:07:15.954 ' 00:07:15.954 03:56:57 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:15.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.954 --rc genhtml_branch_coverage=1 00:07:15.954 --rc genhtml_function_coverage=1 00:07:15.954 --rc genhtml_legend=1 00:07:15.954 --rc geninfo_all_blocks=1 00:07:15.954 --rc geninfo_unexecuted_blocks=1 00:07:15.954 00:07:15.954 ' 00:07:15.954 03:56:57 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:15.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.954 --rc genhtml_branch_coverage=1 00:07:15.954 --rc genhtml_function_coverage=1 00:07:15.954 --rc genhtml_legend=1 00:07:15.954 --rc geninfo_all_blocks=1 00:07:15.954 --rc geninfo_unexecuted_blocks=1 00:07:15.954 00:07:15.954 ' 00:07:15.954 03:56:57 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:15.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.954 --rc genhtml_branch_coverage=1 00:07:15.954 --rc genhtml_function_coverage=1 00:07:15.954 --rc genhtml_legend=1 00:07:15.954 --rc geninfo_all_blocks=1 00:07:15.954 --rc geninfo_unexecuted_blocks=1 00:07:15.954 00:07:15.954 ' 00:07:15.955 03:56:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:15.955 03:56:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58340 00:07:15.955 03:56:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:15.955 03:56:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:15.955 03:56:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58340 00:07:15.955 03:56:57 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58340 ']' 00:07:15.955 03:56:57 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.955 03:56:57 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.955 03:56:57 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.955 03:56:57 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.955 03:56:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:15.955 [2024-12-09 03:56:57.811768] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:07:15.955 [2024-12-09 03:56:57.811910] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58340 ] 00:07:16.224 [2024-12-09 03:56:57.965948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.224 [2024-12-09 03:56:58.059375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.224 [2024-12-09 03:56:58.059516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.224 [2024-12-09 03:56:58.060402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.224 [2024-12-09 03:56:58.060410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.174 03:56:58 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.174 03:56:58 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:17.174 03:56:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:17.174 03:56:58 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.174 03:56:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:17.174 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:17.174 POWER: Cannot set governor of lcore 0 to userspace 00:07:17.174 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:17.174 POWER: Cannot set governor of lcore 0 to performance 00:07:17.174 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:17.174 POWER: Cannot set governor of lcore 0 to userspace 00:07:17.174 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:17.174 POWER: Cannot set governor of lcore 0 to userspace 00:07:17.174 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:17.174 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:17.174 POWER: Unable to set Power Management Environment for lcore 0 00:07:17.174 [2024-12-09 03:56:58.910971] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:07:17.174 [2024-12-09 03:56:58.911113] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:07:17.174 [2024-12-09 03:56:58.911225] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:17.174 [2024-12-09 03:56:58.911325] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:17.174 [2024-12-09 03:56:58.911446] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:17.174 [2024-12-09 03:56:58.911487] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:17.174 03:56:58 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.174 03:56:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:17.174 03:56:58 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.174 03:56:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:17.174 [2024-12-09 03:56:58.995190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.174 [2024-12-09 03:56:59.054099] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:17.174 03:56:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.174 03:56:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:17.174 03:56:59 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.174 03:56:59 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.174 03:56:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:17.174 ************************************ 00:07:17.174 START TEST scheduler_create_thread 00:07:17.174 ************************************ 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.174 2 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.174 3 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.174 4 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.174 5 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.174 6 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.174 7 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.174 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.433 8 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.433 9 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.433 10 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.433 03:56:59 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.433 03:56:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.811 03:57:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.811 03:57:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:18.811 03:57:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:18.811 03:57:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.811 03:57:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.744 ************************************ 00:07:19.744 END TEST scheduler_create_thread 00:07:19.744 ************************************ 00:07:19.744 03:57:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.744 00:07:19.744 real 0m2.614s 00:07:19.744 user 0m0.016s 00:07:19.744 sys 0m0.002s 00:07:19.744 03:57:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.744 03:57:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.001 03:57:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:20.001 03:57:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58340 00:07:20.001 03:57:01 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58340 ']' 00:07:20.001 03:57:01 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58340 00:07:20.001 03:57:01 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:20.001 03:57:01 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.001 03:57:01 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58340 00:07:20.001 killing process with pid 58340 00:07:20.001 03:57:01 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:20.001 03:57:01 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:20.001 03:57:01 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58340' 00:07:20.001 03:57:01 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58340 00:07:20.001 03:57:01 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58340 00:07:20.259 [2024-12-09 03:57:02.160326] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:20.518 00:07:20.518 real 0m4.897s 00:07:20.518 user 0m9.387s 00:07:20.518 sys 0m0.461s 00:07:20.518 03:57:02 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.518 03:57:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:20.518 ************************************ 00:07:20.518 END TEST event_scheduler 00:07:20.518 ************************************ 00:07:20.778 03:57:02 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:20.778 03:57:02 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:20.778 03:57:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.778 03:57:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.778 03:57:02 event -- common/autotest_common.sh@10 -- # set +x 00:07:20.778 ************************************ 00:07:20.778 START TEST app_repeat 00:07:20.778 ************************************ 00:07:20.778 03:57:02 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:20.778 03:57:02 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.778 03:57:02 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.778 03:57:02 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:20.778 03:57:02 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:20.778 03:57:02 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:20.778 03:57:02 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:20.778 03:57:02 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:20.778 03:57:02 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58445 00:07:20.778 03:57:02 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:20.778 03:57:02 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:20.778 Process app_repeat pid: 58445 00:07:20.778 03:57:02 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58445' 00:07:20.778 spdk_app_start Round 0 00:07:20.778 03:57:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:20.778 03:57:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:20.778 03:57:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58445 /var/tmp/spdk-nbd.sock 00:07:20.778 03:57:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58445 ']' 00:07:20.778 03:57:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:20.778 03:57:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:20.778 03:57:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:20.778 03:57:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.778 03:57:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:20.778 [2024-12-09 03:57:02.553151] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:07:20.778 [2024-12-09 03:57:02.553293] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58445 ] 00:07:20.778 [2024-12-09 03:57:02.694904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.040 [2024-12-09 03:57:02.781812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.040 [2024-12-09 03:57:02.781822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.040 [2024-12-09 03:57:02.861024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.040 03:57:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.040 03:57:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:21.040 03:57:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:21.608 Malloc0 00:07:21.608 03:57:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:21.608 Malloc1 00:07:21.867 03:57:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:21.867 03:57:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.867 03:57:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:21.867 03:57:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:21.867 03:57:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.867 03:57:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:21.867 03:57:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:21.867 03:57:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.867 03:57:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:21.867 03:57:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:21.867 03:57:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.867 03:57:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:21.867 03:57:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:21.867 03:57:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:21.867 03:57:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:21.867 03:57:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:22.126 /dev/nbd0 00:07:22.126 03:57:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:22.126 03:57:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:22.126 03:57:03 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:07:22.126 03:57:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:22.126 03:57:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:22.126 03:57:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:22.126 03:57:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:22.126 03:57:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:22.126 03:57:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:22.126 03:57:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:22.126 03:57:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:22.126 1+0 records in 00:07:22.126 1+0 records out 00:07:22.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333138 s, 12.3 MB/s 00:07:22.126 03:57:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:22.126 03:57:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:22.126 03:57:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:22.126 03:57:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:22.126 03:57:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:22.126 03:57:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.126 03:57:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.126 03:57:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:22.385 /dev/nbd1 00:07:22.385 03:57:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:22.385 03:57:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:22.385 03:57:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:22.385 03:57:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:22.385 03:57:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:22.385 03:57:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:22.385 03:57:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:22.385 03:57:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:22.385 03:57:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:22.385 03:57:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:22.385 03:57:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:22.385 1+0 records in 00:07:22.385 1+0 records out 00:07:22.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043459 s, 9.4 MB/s 00:07:22.385 03:57:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:22.385 03:57:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:22.385 03:57:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:22.385 03:57:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:22.385 03:57:04 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:07:22.385 03:57:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.385 03:57:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.385 03:57:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.385 03:57:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.385 03:57:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.643 03:57:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:22.643 { 00:07:22.643 "nbd_device": "/dev/nbd0", 00:07:22.643 "bdev_name": "Malloc0" 00:07:22.643 }, 00:07:22.643 { 00:07:22.643 "nbd_device": "/dev/nbd1", 00:07:22.643 "bdev_name": "Malloc1" 00:07:22.643 } 00:07:22.643 ]' 00:07:22.643 03:57:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:22.643 { 00:07:22.643 "nbd_device": "/dev/nbd0", 00:07:22.643 "bdev_name": "Malloc0" 00:07:22.643 }, 00:07:22.643 { 00:07:22.643 "nbd_device": "/dev/nbd1", 00:07:22.643 "bdev_name": "Malloc1" 00:07:22.643 } 00:07:22.643 ]' 00:07:22.643 03:57:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.643 03:57:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:22.643 /dev/nbd1' 00:07:22.643 03:57:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:22.643 /dev/nbd1' 00:07:22.643 03:57:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.643 03:57:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:22.643 03:57:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:22.643 03:57:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:22.643 03:57:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:22.643 03:57:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:22.643 03:57:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.644 03:57:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.644 03:57:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:22.644 03:57:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:22.644 03:57:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:22.644 03:57:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:22.902 256+0 records in 00:07:22.902 256+0 records out 00:07:22.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108351 s, 96.8 MB/s 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:22.902 256+0 records in 00:07:22.902 256+0 records out 00:07:22.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274489 s, 38.2 MB/s 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:22.902 256+0 records in 00:07:22.902 
256+0 records out 00:07:22.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315241 s, 33.3 MB/s 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.902 03:57:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:23.162 03:57:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:23.162 03:57:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:23.162 03:57:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:23.162 03:57:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.162 03:57:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.162 03:57:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:23.162 03:57:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:23.162 03:57:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.162 03:57:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.162 03:57:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:23.420 03:57:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:23.420 03:57:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:23.420 03:57:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:23.420 03:57:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.420 03:57:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:07:23.420 03:57:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:23.420 03:57:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:23.420 03:57:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.420 03:57:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.420 03:57:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.420 03:57:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.678 03:57:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:23.678 03:57:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:23.678 03:57:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.678 03:57:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:23.678 03:57:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:23.678 03:57:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.678 03:57:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:23.678 03:57:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:23.678 03:57:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:23.678 03:57:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:23.678 03:57:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:23.678 03:57:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:23.678 03:57:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:24.244 03:57:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:24.503 [2024-12-09 03:57:06.258777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:24.503 [2024-12-09 03:57:06.327219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.503 [2024-12-09 03:57:06.327228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.503 [2024-12-09 03:57:06.409136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.503 [2024-12-09 03:57:06.409308] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:24.503 [2024-12-09 03:57:06.409327] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:27.786 spdk_app_start Round 1 00:07:27.786 03:57:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:27.786 03:57:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:27.786 03:57:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58445 /var/tmp/spdk-nbd.sock 00:07:27.786 03:57:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58445 ']' 00:07:27.786 03:57:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:27.786 03:57:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.786 03:57:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:27.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:27.786 03:57:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.786 03:57:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:27.786 03:57:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.786 03:57:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:27.786 03:57:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:27.786 Malloc0 00:07:27.786 03:57:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:28.047 Malloc1 00:07:28.047 03:57:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:28.047 03:57:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.047 03:57:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:28.047 03:57:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:28.047 03:57:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:28.047 03:57:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:28.047 03:57:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:28.047 03:57:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.047 03:57:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:28.047 03:57:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:28.047 03:57:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:28.047 03:57:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:28.047 03:57:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:28.047 03:57:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:28.047 03:57:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:28.047 03:57:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:28.306 /dev/nbd0 00:07:28.306 03:57:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:28.306 03:57:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:28.306 03:57:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:28.306 03:57:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:28.306 03:57:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:28.306 03:57:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:28.306 03:57:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:28.306 03:57:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:28.306 03:57:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:28.306 03:57:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:28.307 03:57:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:28.307 1+0 records in 00:07:28.307 1+0 records out 
00:07:28.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288952 s, 14.2 MB/s 00:07:28.307 03:57:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:28.307 03:57:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:28.307 03:57:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:28.307 03:57:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:28.307 03:57:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:28.307 03:57:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:28.307 03:57:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:28.307 03:57:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:28.565 /dev/nbd1 00:07:28.565 03:57:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:28.565 03:57:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:28.565 03:57:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:28.565 03:57:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:28.565 03:57:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:28.565 03:57:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:28.565 03:57:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:28.565 03:57:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:28.565 03:57:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:28.565 03:57:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:28.565 03:57:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:28.565 1+0 records in 00:07:28.565 1+0 records out 00:07:28.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271701 s, 15.1 MB/s 00:07:28.565 03:57:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:28.565 03:57:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:28.565 03:57:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:28.565 03:57:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:28.565 03:57:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:28.565 03:57:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:28.565 03:57:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:28.565 03:57:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:28.565 03:57:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.565 03:57:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:29.134 { 00:07:29.134 "nbd_device": "/dev/nbd0", 00:07:29.134 "bdev_name": "Malloc0" 00:07:29.134 }, 00:07:29.134 { 00:07:29.134 "nbd_device": "/dev/nbd1", 00:07:29.134 "bdev_name": "Malloc1" 00:07:29.134 } 
00:07:29.134 ]' 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:29.134 { 00:07:29.134 "nbd_device": "/dev/nbd0", 00:07:29.134 "bdev_name": "Malloc0" 00:07:29.134 }, 00:07:29.134 { 00:07:29.134 "nbd_device": "/dev/nbd1", 00:07:29.134 "bdev_name": "Malloc1" 00:07:29.134 } 00:07:29.134 ]' 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:29.134 /dev/nbd1' 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:29.134 /dev/nbd1' 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:29.134 256+0 records in 00:07:29.134 256+0 records out 00:07:29.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00736549 s, 142 MB/s 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:29.134 256+0 records in 00:07:29.134 256+0 records out 00:07:29.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267682 s, 39.2 MB/s 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:29.134 256+0 records in 00:07:29.134 256+0 records out 00:07:29.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027023 s, 38.8 MB/s 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:29.134 03:57:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:29.392 03:57:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:29.392 03:57:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:29.392 03:57:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:29.392 03:57:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.392 03:57:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.392 03:57:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:29.392 03:57:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:29.392 03:57:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:29.392 03:57:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:29.392 03:57:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:29.649 03:57:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:29.649 03:57:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:29.649 03:57:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:29.649 03:57:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.649 03:57:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.649 03:57:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:29.649 03:57:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:29.649 03:57:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:29.649 03:57:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:29.649 03:57:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.649 03:57:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:30.215 03:57:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:30.215 03:57:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:30.215 03:57:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:07:30.215 03:57:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:30.215 03:57:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:30.215 03:57:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:30.215 03:57:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:30.215 03:57:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:30.215 03:57:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:30.215 03:57:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:30.215 03:57:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:30.215 03:57:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:30.215 03:57:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:30.472 03:57:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:30.730 [2024-12-09 03:57:12.459924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:30.730 [2024-12-09 03:57:12.514916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.730 [2024-12-09 03:57:12.514925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.730 [2024-12-09 03:57:12.592633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.730 [2024-12-09 03:57:12.592760] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:30.730 [2024-12-09 03:57:12.592776] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:34.009 spdk_app_start Round 2 00:07:34.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:34.009 03:57:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:34.009 03:57:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:34.009 03:57:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58445 /var/tmp/spdk-nbd.sock 00:07:34.009 03:57:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58445 ']' 00:07:34.009 03:57:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:34.009 03:57:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.009 03:57:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:34.009 03:57:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.009 03:57:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:34.009 03:57:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.009 03:57:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:34.009 03:57:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:34.009 Malloc0 00:07:34.009 03:57:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:34.266 Malloc1 00:07:34.266 03:57:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:34.266 03:57:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.266 03:57:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:34.266 03:57:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:34.266 03:57:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.266 03:57:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:34.266 03:57:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:34.266 03:57:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.266 03:57:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:34.266 03:57:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:34.266 03:57:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.266 03:57:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:34.266 03:57:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:34.266 03:57:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:34.266 03:57:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:34.266 03:57:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:34.522 /dev/nbd0 00:07:34.844 03:57:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:34.844 03:57:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:34.844 1+0 records in 00:07:34.844 1+0 records out 
00:07:34.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000750901 s, 5.5 MB/s 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:34.844 03:57:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:34.844 03:57:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:34.844 03:57:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:34.844 /dev/nbd1 00:07:34.844 03:57:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:34.844 03:57:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:34.844 03:57:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:34.844 1+0 records in 00:07:34.844 1+0 records out 00:07:34.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324075 s, 12.6 MB/s 00:07:35.101 03:57:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:35.101 03:57:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:35.101 03:57:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:35.101 03:57:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:35.101 03:57:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:35.101 03:57:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:35.101 03:57:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:35.101 03:57:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:35.101 03:57:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.101 03:57:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:35.360 { 00:07:35.360 "nbd_device": "/dev/nbd0", 00:07:35.360 "bdev_name": "Malloc0" 00:07:35.360 }, 00:07:35.360 { 00:07:35.360 "nbd_device": "/dev/nbd1", 00:07:35.360 "bdev_name": "Malloc1" 00:07:35.360 } 
00:07:35.360 ]' 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:35.360 { 00:07:35.360 "nbd_device": "/dev/nbd0", 00:07:35.360 "bdev_name": "Malloc0" 00:07:35.360 }, 00:07:35.360 { 00:07:35.360 "nbd_device": "/dev/nbd1", 00:07:35.360 "bdev_name": "Malloc1" 00:07:35.360 } 00:07:35.360 ]' 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:35.360 /dev/nbd1' 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:35.360 /dev/nbd1' 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:35.360 256+0 records in 00:07:35.360 256+0 records out 00:07:35.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00561724 s, 187 MB/s 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:35.360 256+0 records in 00:07:35.360 256+0 records out 00:07:35.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247373 s, 42.4 MB/s 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:35.360 256+0 records in 00:07:35.360 256+0 records out 00:07:35.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030856 s, 34.0 MB/s 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.360 03:57:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:35.925 03:57:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:35.925 03:57:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:35.925 03:57:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:35.925 03:57:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.925 03:57:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.925 03:57:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:35.925 03:57:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:35.925 03:57:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.925 03:57:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.925 03:57:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:36.184 03:57:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:36.184 03:57:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:36.184 03:57:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:36.184 03:57:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.184 03:57:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.184 03:57:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:36.184 03:57:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:36.184 03:57:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.184 03:57:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:36.184 03:57:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.184 03:57:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:36.443 03:57:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:36.443 03:57:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:36.443 03:57:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:36.443 03:57:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:36.443 03:57:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:36.443 03:57:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:36.443 03:57:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:36.443 03:57:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:36.443 03:57:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:36.443 03:57:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:36.443 03:57:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:36.443 03:57:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:36.443 03:57:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:37.010 03:57:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:37.269 [2024-12-09 03:57:19.068935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:37.269 [2024-12-09 03:57:19.161738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.269 [2024-12-09 03:57:19.161748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.527 [2024-12-09 03:57:19.245082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.527 [2024-12-09 03:57:19.245215] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:37.527 [2024-12-09 03:57:19.245244] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:40.062 03:57:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58445 /var/tmp/spdk-nbd.sock 00:07:40.062 03:57:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58445 ']' 00:07:40.062 03:57:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:40.062 03:57:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:40.062 03:57:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
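The nbd_dd_data_verify trace above follows a simple write-then-read-back pattern: fill a scratch file with 1 MiB of random data, dd it onto each exported NBD device with O_DIRECT, then cmp the device contents against the scratch file before stopping the disks over the RPC socket. A minimal stand-alone sketch of that pass for a single device follows; the device argument and scratch-file path are placeholders, not the paths the harness uses.

#!/usr/bin/env bash
# Write 1 MiB of random data through an NBD device and verify it reads back
# identically. Illustrative only: the device argument and scratch path are assumptions.
set -eu
dev=${1:-/dev/nbd0}
tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB of random data
dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct     # push it through the NBD device
cmp -b -n 1M "$tmp_file" "$dev"                                # byte-compare the first 1 MiB back
rm "$tmp_file"
echo "verify OK on $dev"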
00:07:40.062 03:57:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.062 03:57:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:40.321 03:57:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.321 03:57:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:40.321 03:57:22 event.app_repeat -- event/event.sh@39 -- # killprocess 58445 00:07:40.321 03:57:22 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58445 ']' 00:07:40.321 03:57:22 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58445 00:07:40.321 03:57:22 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:40.321 03:57:22 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.321 03:57:22 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58445 00:07:40.321 03:57:22 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.321 killing process with pid 58445 00:07:40.321 03:57:22 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.321 03:57:22 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58445' 00:07:40.321 03:57:22 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58445 00:07:40.321 03:57:22 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58445 00:07:40.579 spdk_app_start is called in Round 0. 00:07:40.579 Shutdown signal received, stop current app iteration 00:07:40.579 Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 reinitialization... 00:07:40.579 spdk_app_start is called in Round 1. 00:07:40.579 Shutdown signal received, stop current app iteration 00:07:40.579 Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 reinitialization... 00:07:40.579 spdk_app_start is called in Round 2. 00:07:40.579 Shutdown signal received, stop current app iteration 00:07:40.579 Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 reinitialization... 00:07:40.579 spdk_app_start is called in Round 3. 00:07:40.579 Shutdown signal received, stop current app iteration 00:07:40.579 03:57:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:40.579 03:57:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:40.579 00:07:40.579 real 0m19.840s 00:07:40.579 user 0m44.865s 00:07:40.579 sys 0m3.368s 00:07:40.579 03:57:22 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.579 03:57:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:40.579 ************************************ 00:07:40.579 END TEST app_repeat 00:07:40.579 ************************************ 00:07:40.579 03:57:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:40.579 03:57:22 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:40.579 03:57:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.579 03:57:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.579 03:57:22 event -- common/autotest_common.sh@10 -- # set +x 00:07:40.579 ************************************ 00:07:40.579 START TEST cpu_locks 00:07:40.579 ************************************ 00:07:40.579 03:57:22 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:40.579 * Looking for test storage... 
00:07:40.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:40.579 03:57:22 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:40.579 03:57:22 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:07:40.579 03:57:22 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:40.839 03:57:22 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.839 03:57:22 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:40.839 03:57:22 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.839 03:57:22 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:40.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.839 --rc genhtml_branch_coverage=1 00:07:40.839 --rc genhtml_function_coverage=1 00:07:40.839 --rc genhtml_legend=1 00:07:40.839 --rc geninfo_all_blocks=1 00:07:40.839 --rc geninfo_unexecuted_blocks=1 00:07:40.839 00:07:40.839 ' 00:07:40.839 03:57:22 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:40.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.839 --rc genhtml_branch_coverage=1 00:07:40.839 --rc genhtml_function_coverage=1 
00:07:40.839 --rc genhtml_legend=1 00:07:40.839 --rc geninfo_all_blocks=1 00:07:40.839 --rc geninfo_unexecuted_blocks=1 00:07:40.839 00:07:40.839 ' 00:07:40.839 03:57:22 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:40.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.839 --rc genhtml_branch_coverage=1 00:07:40.839 --rc genhtml_function_coverage=1 00:07:40.839 --rc genhtml_legend=1 00:07:40.839 --rc geninfo_all_blocks=1 00:07:40.839 --rc geninfo_unexecuted_blocks=1 00:07:40.839 00:07:40.839 ' 00:07:40.839 03:57:22 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:40.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.839 --rc genhtml_branch_coverage=1 00:07:40.839 --rc genhtml_function_coverage=1 00:07:40.839 --rc genhtml_legend=1 00:07:40.839 --rc geninfo_all_blocks=1 00:07:40.839 --rc geninfo_unexecuted_blocks=1 00:07:40.839 00:07:40.839 ' 00:07:40.839 03:57:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:40.839 03:57:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:40.839 03:57:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:40.839 03:57:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:40.839 03:57:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.839 03:57:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.839 03:57:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.839 ************************************ 00:07:40.839 START TEST default_locks 00:07:40.839 ************************************ 00:07:40.839 03:57:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:40.839 03:57:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58890 00:07:40.839 03:57:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58890 00:07:40.839 03:57:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58890 ']' 00:07:40.839 03:57:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.839 03:57:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.839 03:57:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:40.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.839 03:57:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.839 03:57:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.839 03:57:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.839 [2024-12-09 03:57:22.689542] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:07:40.839 [2024-12-09 03:57:22.689703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58890 ] 00:07:41.099 [2024-12-09 03:57:22.843204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.099 [2024-12-09 03:57:22.920830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.099 [2024-12-09 03:57:23.003152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.358 03:57:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.358 03:57:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:41.358 03:57:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58890 00:07:41.358 03:57:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:41.358 03:57:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58890 00:07:41.927 03:57:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58890 00:07:41.927 03:57:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58890 ']' 00:07:41.928 03:57:23 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58890 00:07:41.928 03:57:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:41.928 03:57:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.928 03:57:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58890 00:07:41.928 03:57:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.928 killing process with pid 58890 00:07:41.928 03:57:23 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.928 03:57:23 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58890' 00:07:41.928 03:57:23 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58890 00:07:41.928 03:57:23 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58890 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58890 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58890 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58890 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58890 ']' 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.186 
03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.186 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58890) - No such process 00:07:42.186 ERROR: process (pid: 58890) is no longer running 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:42.186 00:07:42.186 real 0m1.501s 00:07:42.186 user 0m1.438s 00:07:42.186 sys 0m0.593s 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.186 03:57:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.186 ************************************ 00:07:42.186 END TEST default_locks 00:07:42.186 ************************************ 00:07:42.444 03:57:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:42.444 03:57:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.444 03:57:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.444 03:57:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.444 ************************************ 00:07:42.444 START TEST default_locks_via_rpc 00:07:42.444 ************************************ 00:07:42.444 03:57:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:42.444 03:57:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58935 00:07:42.444 03:57:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:42.444 03:57:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58935 00:07:42.444 03:57:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58935 ']' 00:07:42.444 03:57:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.444 03:57:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:07:42.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.444 03:57:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.444 03:57:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.444 03:57:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.444 [2024-12-09 03:57:24.251765] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:07:42.444 [2024-12-09 03:57:24.251926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58935 ] 00:07:42.703 [2024-12-09 03:57:24.400110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.703 [2024-12-09 03:57:24.472329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.703 [2024-12-09 03:57:24.552786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.641 03:57:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.642 03:57:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:43.642 03:57:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:43.642 03:57:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.642 03:57:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.642 03:57:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.642 03:57:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:43.642 03:57:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:43.642 03:57:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:43.642 03:57:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:43.642 03:57:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:43.642 03:57:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.642 03:57:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.642 03:57:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.642 03:57:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58935 00:07:43.642 03:57:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:43.642 03:57:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58935 00:07:43.901 03:57:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58935 00:07:43.901 03:57:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58935 ']' 00:07:43.901 03:57:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58935 00:07:43.901 03:57:25 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:43.901 03:57:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.901 03:57:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58935 00:07:43.901 03:57:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.901 03:57:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.901 killing process with pid 58935 00:07:43.901 03:57:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58935' 00:07:43.901 03:57:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58935 00:07:43.901 03:57:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58935 00:07:44.467 00:07:44.467 real 0m2.213s 00:07:44.467 user 0m2.408s 00:07:44.467 sys 0m0.597s 00:07:44.467 03:57:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.467 ************************************ 00:07:44.467 END TEST default_locks_via_rpc 00:07:44.467 ************************************ 00:07:44.467 03:57:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.725 03:57:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:44.725 03:57:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.725 03:57:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.725 03:57:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:44.725 ************************************ 00:07:44.725 START TEST non_locking_app_on_locked_coremask 00:07:44.725 ************************************ 00:07:44.725 03:57:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:44.725 03:57:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58991 00:07:44.725 03:57:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58991 /var/tmp/spdk.sock 00:07:44.725 03:57:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:44.725 03:57:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58991 ']' 00:07:44.725 03:57:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.725 03:57:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.725 03:57:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
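The default_locks and default_locks_via_rpc runs that finish above both confirm lock ownership the same way: locks_exist pipes lslocks -p <pid> into grep -q spdk_cpu_lock, and the via_rpc variant additionally drops and re-takes the locks at runtime with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs. A rough stand-alone equivalent is sketched below; the pid argument is an assumption, and the rpc.py path is the one used elsewhere in this log.

#!/usr/bin/env bash
# Re-check SPDK core-lock ownership for a running target (illustrative sketch).
pid=$1
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

check_lock() {
    # A target started with -m 0x1 holds an advisory lock tagged spdk_cpu_lock.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

check_lock && echo "pid $pid holds its core lock"

# Release and re-acquire the locks over RPC, as default_locks_via_rpc does.
"$rpc" framework_disable_cpumask_locks
check_lock || echo "locks released"
"$rpc" framework_enable_cpumask_locks
check_lock && echo "locks re-acquired"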
00:07:44.725 03:57:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.725 03:57:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.725 [2024-12-09 03:57:26.520955] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:07:44.725 [2024-12-09 03:57:26.521090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58991 ] 00:07:44.725 [2024-12-09 03:57:26.662954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.983 [2024-12-09 03:57:26.745563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.983 [2024-12-09 03:57:26.857819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.915 03:57:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.915 03:57:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:45.915 03:57:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59007 00:07:45.915 03:57:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59007 /var/tmp/spdk2.sock 00:07:45.915 03:57:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59007 ']' 00:07:45.915 03:57:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:45.915 03:57:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.915 03:57:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:45.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:45.915 03:57:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.915 03:57:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.915 03:57:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:45.915 [2024-12-09 03:57:27.591810] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:07:45.915 [2024-12-09 03:57:27.591962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59007 ] 00:07:45.915 [2024-12-09 03:57:27.753761] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
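non_locking_app_on_locked_coremask, traced above, brings up a second target (pid 59007 here) on the same core as the locked first one; it only starts because it is launched with --disable-cpumask-locks and its own RPC socket, which is why the log prints "CPU core locks deactivated." for it. A condensed version of that launch sequence follows; the binary path matches this log, while the sleep is a crude stand-in for the harness's waitforlisten helper.

#!/usr/bin/env bash
# Two spdk_tgt instances sharing core 0: the first claims the core lock,
# the second opts out of locking and uses a separate RPC socket.
tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$tgt" -m 0x1 &                                                # claims the core 0 lock file
pid1=$!
sleep 1                                                        # simplified waitforlisten

"$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & # starts despite the held lock
pid2=$!
sleep 1

kill "$pid1" "$pid2"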
00:07:45.915 [2024-12-09 03:57:27.753846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.173 [2024-12-09 03:57:27.933689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.432 [2024-12-09 03:57:28.175252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.000 03:57:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.000 03:57:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:47.000 03:57:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58991 00:07:47.000 03:57:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58991 00:07:47.000 03:57:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:47.567 03:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58991 00:07:47.567 03:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58991 ']' 00:07:47.567 03:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58991 00:07:47.567 03:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:47.567 03:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.567 03:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58991 00:07:47.567 03:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:47.567 03:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:47.567 03:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58991' 00:07:47.567 killing process with pid 58991 00:07:47.567 03:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58991 00:07:47.567 03:57:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58991 00:07:48.945 03:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59007 00:07:48.945 03:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59007 ']' 00:07:48.945 03:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59007 00:07:48.945 03:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:48.945 03:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.945 03:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59007 00:07:48.945 03:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.945 03:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.945 killing process with pid 59007 00:07:48.945 03:57:30 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59007' 00:07:48.945 03:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59007 00:07:48.945 03:57:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59007 00:07:49.517 00:07:49.517 real 0m4.842s 00:07:49.517 user 0m5.089s 00:07:49.517 sys 0m1.422s 00:07:49.517 03:57:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.517 03:57:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.517 ************************************ 00:07:49.517 END TEST non_locking_app_on_locked_coremask 00:07:49.517 ************************************ 00:07:49.517 03:57:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:49.517 03:57:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.517 03:57:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.517 03:57:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.517 ************************************ 00:07:49.517 START TEST locking_app_on_unlocked_coremask 00:07:49.517 ************************************ 00:07:49.517 03:57:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:49.517 03:57:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59080 00:07:49.517 03:57:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59080 /var/tmp/spdk.sock 00:07:49.517 03:57:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:49.517 03:57:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59080 ']' 00:07:49.517 03:57:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.517 03:57:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.518 03:57:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.518 03:57:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.518 03:57:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.518 [2024-12-09 03:57:31.415980] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:07:49.518 [2024-12-09 03:57:31.416103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59080 ] 00:07:49.776 [2024-12-09 03:57:31.570153] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:49.776 [2024-12-09 03:57:31.570239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.776 [2024-12-09 03:57:31.658095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.034 [2024-12-09 03:57:31.764446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.601 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.601 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:50.601 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59096 00:07:50.601 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:50.601 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59096 /var/tmp/spdk2.sock 00:07:50.601 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59096 ']' 00:07:50.601 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:50.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:50.601 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.601 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:50.601 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.601 03:57:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.601 [2024-12-09 03:57:32.484076] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:07:50.601 [2024-12-09 03:57:32.484688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59096 ] 00:07:50.859 [2024-12-09 03:57:32.651070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.117 [2024-12-09 03:57:32.811324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.117 [2024-12-09 03:57:33.017183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.690 03:57:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.690 03:57:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:51.690 03:57:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59096 00:07:51.690 03:57:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:51.690 03:57:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59096 00:07:52.623 03:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59080 00:07:52.623 03:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59080 ']' 00:07:52.623 03:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59080 00:07:52.623 03:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:52.623 03:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.623 03:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59080 00:07:52.623 killing process with pid 59080 00:07:52.623 03:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.623 03:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.623 03:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59080' 00:07:52.623 03:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59080 00:07:52.623 03:57:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59080 00:07:53.995 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59096 00:07:53.995 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59096 ']' 00:07:53.995 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59096 00:07:53.995 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:53.995 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.995 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59096 00:07:53.995 killing process with pid 59096 00:07:53.995 03:57:35 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.995 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.995 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59096' 00:07:53.995 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59096 00:07:53.995 03:57:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59096 00:07:54.560 ************************************ 00:07:54.560 END TEST locking_app_on_unlocked_coremask 00:07:54.560 ************************************ 00:07:54.560 00:07:54.560 real 0m4.893s 00:07:54.560 user 0m5.210s 00:07:54.560 sys 0m1.465s 00:07:54.560 03:57:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.560 03:57:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:54.560 03:57:36 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:54.560 03:57:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.560 03:57:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.560 03:57:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:54.560 ************************************ 00:07:54.560 START TEST locking_app_on_locked_coremask 00:07:54.560 ************************************ 00:07:54.560 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:54.560 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59173 00:07:54.560 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59173 /var/tmp/spdk.sock 00:07:54.560 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:54.560 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59173 ']' 00:07:54.560 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.560 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.560 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.560 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.560 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:54.560 [2024-12-09 03:57:36.356672] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:07:54.560 [2024-12-09 03:57:36.356814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59173 ] 00:07:54.560 [2024-12-09 03:57:36.498454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.818 [2024-12-09 03:57:36.580592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.818 [2024-12-09 03:57:36.688777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.076 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.076 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:55.076 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59182 00:07:55.076 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:55.076 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59182 /var/tmp/spdk2.sock 00:07:55.076 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:55.076 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59182 /var/tmp/spdk2.sock 00:07:55.076 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:55.076 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.076 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:55.076 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.076 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59182 /var/tmp/spdk2.sock 00:07:55.076 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59182 ']' 00:07:55.076 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:55.076 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.076 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:55.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:55.076 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.076 03:57:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.333 [2024-12-09 03:57:37.035264] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:07:55.333 [2024-12-09 03:57:37.035659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59182 ] 00:07:55.333 [2024-12-09 03:57:37.194843] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59173 has claimed it. 00:07:55.333 [2024-12-09 03:57:37.194974] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:55.898 ERROR: process (pid: 59182) is no longer running 00:07:55.898 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59182) - No such process 00:07:55.898 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.898 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:55.898 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:55.898 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:55.898 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:55.898 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:55.898 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59173 00:07:55.898 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59173 00:07:55.898 03:57:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:56.465 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59173 00:07:56.465 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59173 ']' 00:07:56.465 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59173 00:07:56.465 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:56.465 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.465 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59173 00:07:56.465 killing process with pid 59173 00:07:56.465 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.465 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.465 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59173' 00:07:56.465 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59173 00:07:56.465 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59173 00:07:57.032 ************************************ 00:07:57.032 END TEST locking_app_on_locked_coremask 00:07:57.032 ************************************ 00:07:57.032 00:07:57.032 real 0m2.539s 00:07:57.032 user 0m2.708s 00:07:57.032 sys 0m0.732s 00:07:57.032 03:57:38 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.032 03:57:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:57.032 03:57:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:57.032 03:57:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.032 03:57:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.032 03:57:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.032 ************************************ 00:07:57.032 START TEST locking_overlapped_coremask 00:07:57.032 ************************************ 00:07:57.032 03:57:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:57.032 03:57:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59230 00:07:57.032 03:57:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59230 /var/tmp/spdk.sock 00:07:57.032 03:57:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:57.032 03:57:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59230 ']' 00:07:57.032 03:57:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.032 03:57:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.032 03:57:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.032 03:57:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.032 03:57:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:57.032 [2024-12-09 03:57:38.963658] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
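locking_app_on_locked_coremask, which finished just above, is the negative case: a second target is pointed at a core that pid 59173 still holds, so spdk_app_start aborts with "Cannot create lock on core 0, probably process 59173 has claimed it" and the harness asserts the failure by wrapping waitforlisten in its NOT helper. A bare-bones reproduction of the expected failure, with the negation reduced to a plain exit-status check:

#!/usr/bin/env bash
# Expect a second instance on an already-locked core to refuse to start.
# The '!' test is a simplified stand-in for the harness's NOT helper.
tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$tgt" -m 0x1 &                 # first instance holds the core 0 lock
pid1=$!
sleep 1

if ! "$tgt" -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "second instance refused to start, as expected"
fi

kill "$pid1"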
00:07:57.032 [2024-12-09 03:57:38.963808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59230 ] 00:07:57.291 [2024-12-09 03:57:39.117954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.291 [2024-12-09 03:57:39.214280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.291 [2024-12-09 03:57:39.214346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.291 [2024-12-09 03:57:39.214347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.560 [2024-12-09 03:57:39.323450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.150 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.150 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:58.150 03:57:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59252 00:07:58.150 03:57:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:58.150 03:57:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59252 /var/tmp/spdk2.sock 00:07:58.150 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:58.150 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59252 /var/tmp/spdk2.sock 00:07:58.150 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:58.150 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.150 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:58.150 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.150 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59252 /var/tmp/spdk2.sock 00:07:58.150 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59252 ']' 00:07:58.150 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:58.150 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.150 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:58.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:58.150 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.150 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.150 [2024-12-09 03:57:40.084273] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:07:58.150 [2024-12-09 03:57:40.084406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59252 ] 00:07:58.408 [2024-12-09 03:57:40.245346] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59230 has claimed it. 00:07:58.408 [2024-12-09 03:57:40.249195] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:58.977 ERROR: process (pid: 59252) is no longer running 00:07:58.977 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59252) - No such process 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59230 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59230 ']' 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59230 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59230 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.977 killing process with pid 59230 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59230' 00:07:58.977 03:57:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59230 00:07:58.977 03:57:40 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59230 00:07:59.545 00:07:59.545 real 0m2.567s 00:07:59.545 user 0m7.188s 00:07:59.545 sys 0m0.590s 00:07:59.545 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.545 03:57:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.545 ************************************ 00:07:59.545 END TEST locking_overlapped_coremask 00:07:59.545 ************************************ 00:07:59.805 03:57:41 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:59.805 03:57:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.805 03:57:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.805 03:57:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.805 ************************************ 00:07:59.805 START TEST locking_overlapped_coremask_via_rpc 00:07:59.805 ************************************ 00:07:59.805 03:57:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:59.805 03:57:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59298 00:07:59.805 03:57:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59298 /var/tmp/spdk.sock 00:07:59.805 03:57:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59298 ']' 00:07:59.805 03:57:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:59.805 03:57:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.805 03:57:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.805 03:57:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.805 03:57:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.805 03:57:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.805 [2024-12-09 03:57:41.599320] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:07:59.805 [2024-12-09 03:57:41.600057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59298 ] 00:07:59.805 [2024-12-09 03:57:41.747112] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:59.805 [2024-12-09 03:57:41.747187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:00.065 [2024-12-09 03:57:41.821009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.065 [2024-12-09 03:57:41.821067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.065 [2024-12-09 03:57:41.821074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.065 [2024-12-09 03:57:41.939725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.016 03:57:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.016 03:57:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:01.016 03:57:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59318 00:08:01.016 03:57:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:01.016 03:57:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59318 /var/tmp/spdk2.sock 00:08:01.016 03:57:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59318 ']' 00:08:01.016 03:57:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:01.016 03:57:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.016 03:57:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:01.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:01.016 03:57:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.016 03:57:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.016 [2024-12-09 03:57:42.714549] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:01.016 [2024-12-09 03:57:42.714660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59318 ] 00:08:01.016 [2024-12-09 03:57:42.877247] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
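Both targets in this _via_rpc variant (pids 59298 and 59318) come up on overlapping masks without conflict because each was launched with --disable-cpumask-locks, which is what the "CPU core locks deactivated" notices report: neither process takes per-core lock files at startup. A minimal check, assuming the lock files live under the /var/tmp/spdk_cpu_lock_* glob that check_remaining_locks uses later in this log:

  # Sketch only: with --disable-cpumask-locks neither target holds a core lock yet.
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo 'no CPU core locks held'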
00:08:01.016 [2024-12-09 03:57:42.880212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.274 [2024-12-09 03:57:43.045767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.274 [2024-12-09 03:57:43.049307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.274 [2024-12-09 03:57:43.049308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:01.532 [2024-12-09 03:57:43.240006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.097 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.097 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:02.097 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:02.097 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.097 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.097 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.097 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:02.097 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:02.097 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.098 [2024-12-09 03:57:43.794360] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59298 has claimed it. 
00:08:02.098 request: 00:08:02.098 { 00:08:02.098 "method": "framework_enable_cpumask_locks", 00:08:02.098 "req_id": 1 00:08:02.098 } 00:08:02.098 Got JSON-RPC error response 00:08:02.098 response: 00:08:02.098 { 00:08:02.098 "code": -32603, 00:08:02.098 "message": "Failed to claim CPU core: 2" 00:08:02.098 } 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59298 /var/tmp/spdk.sock 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59298 ']' 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.098 03:57:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.356 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.356 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:02.356 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59318 /var/tmp/spdk2.sock 00:08:02.356 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59318 ']' 00:08:02.356 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:02.356 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.356 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:02.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
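What the exchange above shows: the first instance (pid 59298) turned its core locks on via the framework_enable_cpumask_locks RPC and claimed cores 0-2; the same RPC sent to the second instance over /var/tmp/spdk2.sock then failed with -32603 "Failed to claim CPU core: 2", since core 2 is already locked. rpc_cmd in the harness wraps scripts/rpc.py, so issued by hand the two calls would look roughly like this sketch (paths as they appear elsewhere in this log):

  # Sketch of the two RPC calls exercised by the test.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
  # -> succeeds; /var/tmp/spdk_cpu_lock_000..002 now belong to pid 59298
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # -> JSON-RPC error -32603, the response captured above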
00:08:02.356 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.356 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.613 ************************************ 00:08:02.613 END TEST locking_overlapped_coremask_via_rpc 00:08:02.613 ************************************ 00:08:02.613 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.613 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:02.613 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:02.613 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:02.613 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:02.613 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:02.613 00:08:02.613 real 0m2.848s 00:08:02.613 user 0m1.560s 00:08:02.613 sys 0m0.227s 00:08:02.613 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.613 03:57:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.613 03:57:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:02.613 03:57:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59298 ]] 00:08:02.613 03:57:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59298 00:08:02.613 03:57:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59298 ']' 00:08:02.613 03:57:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59298 00:08:02.613 03:57:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:02.613 03:57:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.613 03:57:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59298 00:08:02.613 killing process with pid 59298 00:08:02.613 03:57:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.613 03:57:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.613 03:57:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59298' 00:08:02.613 03:57:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59298 00:08:02.613 03:57:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59298 00:08:03.267 03:57:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59318 ]] 00:08:03.267 03:57:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59318 00:08:03.267 03:57:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59318 ']' 00:08:03.267 03:57:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59318 00:08:03.267 03:57:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:03.267 03:57:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.267 
03:57:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59318 00:08:03.267 killing process with pid 59318 00:08:03.267 03:57:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:03.267 03:57:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:03.267 03:57:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59318' 00:08:03.267 03:57:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59318 00:08:03.267 03:57:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59318 00:08:03.833 03:57:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:03.833 03:57:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:03.833 03:57:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59298 ]] 00:08:03.833 03:57:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59298 00:08:03.833 03:57:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59298 ']' 00:08:03.833 03:57:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59298 00:08:03.833 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59298) - No such process 00:08:03.833 Process with pid 59298 is not found 00:08:03.833 03:57:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59298 is not found' 00:08:03.833 03:57:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59318 ]] 00:08:03.833 03:57:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59318 00:08:03.833 03:57:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59318 ']' 00:08:03.833 03:57:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59318 00:08:03.833 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59318) - No such process 00:08:03.833 Process with pid 59318 is not found 00:08:03.833 03:57:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59318 is not found' 00:08:03.833 03:57:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:03.833 00:08:03.833 real 0m23.202s 00:08:03.833 user 0m40.226s 00:08:03.833 sys 0m6.785s 00:08:03.833 03:57:45 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.833 03:57:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:03.833 ************************************ 00:08:03.833 END TEST cpu_locks 00:08:03.833 ************************************ 00:08:03.833 00:08:03.833 real 0m52.342s 00:08:03.833 user 1m41.062s 00:08:03.833 sys 0m11.072s 00:08:03.833 03:57:45 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.833 ************************************ 00:08:03.833 03:57:45 event -- common/autotest_common.sh@10 -- # set +x 00:08:03.833 END TEST event 00:08:03.833 ************************************ 00:08:03.833 03:57:45 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:03.833 03:57:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.833 03:57:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.833 03:57:45 -- common/autotest_common.sh@10 -- # set +x 00:08:03.833 ************************************ 00:08:03.833 START TEST thread 00:08:03.833 ************************************ 00:08:03.833 03:57:45 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:03.833 * Looking for test storage... 
00:08:03.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:03.833 03:57:45 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:03.833 03:57:45 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:08:03.833 03:57:45 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:04.090 03:57:45 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:04.090 03:57:45 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.090 03:57:45 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.090 03:57:45 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.090 03:57:45 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.090 03:57:45 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.090 03:57:45 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.090 03:57:45 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.090 03:57:45 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.090 03:57:45 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.090 03:57:45 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.090 03:57:45 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.090 03:57:45 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:04.090 03:57:45 thread -- scripts/common.sh@345 -- # : 1 00:08:04.090 03:57:45 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.090 03:57:45 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:04.090 03:57:45 thread -- scripts/common.sh@365 -- # decimal 1 00:08:04.090 03:57:45 thread -- scripts/common.sh@353 -- # local d=1 00:08:04.090 03:57:45 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.090 03:57:45 thread -- scripts/common.sh@355 -- # echo 1 00:08:04.090 03:57:45 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.090 03:57:45 thread -- scripts/common.sh@366 -- # decimal 2 00:08:04.090 03:57:45 thread -- scripts/common.sh@353 -- # local d=2 00:08:04.090 03:57:45 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.090 03:57:45 thread -- scripts/common.sh@355 -- # echo 2 00:08:04.090 03:57:45 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.090 03:57:45 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.090 03:57:45 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.090 03:57:45 thread -- scripts/common.sh@368 -- # return 0 00:08:04.090 03:57:45 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.090 03:57:45 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:04.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.090 --rc genhtml_branch_coverage=1 00:08:04.090 --rc genhtml_function_coverage=1 00:08:04.090 --rc genhtml_legend=1 00:08:04.090 --rc geninfo_all_blocks=1 00:08:04.090 --rc geninfo_unexecuted_blocks=1 00:08:04.090 00:08:04.090 ' 00:08:04.090 03:57:45 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:04.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.090 --rc genhtml_branch_coverage=1 00:08:04.090 --rc genhtml_function_coverage=1 00:08:04.090 --rc genhtml_legend=1 00:08:04.090 --rc geninfo_all_blocks=1 00:08:04.090 --rc geninfo_unexecuted_blocks=1 00:08:04.090 00:08:04.090 ' 00:08:04.090 03:57:45 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:04.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:04.090 --rc genhtml_branch_coverage=1 00:08:04.090 --rc genhtml_function_coverage=1 00:08:04.090 --rc genhtml_legend=1 00:08:04.090 --rc geninfo_all_blocks=1 00:08:04.090 --rc geninfo_unexecuted_blocks=1 00:08:04.090 00:08:04.090 ' 00:08:04.090 03:57:45 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:04.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.090 --rc genhtml_branch_coverage=1 00:08:04.090 --rc genhtml_function_coverage=1 00:08:04.090 --rc genhtml_legend=1 00:08:04.090 --rc geninfo_all_blocks=1 00:08:04.090 --rc geninfo_unexecuted_blocks=1 00:08:04.090 00:08:04.090 ' 00:08:04.091 03:57:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:04.091 03:57:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:04.091 03:57:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.091 03:57:45 thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.091 ************************************ 00:08:04.091 START TEST thread_poller_perf 00:08:04.091 ************************************ 00:08:04.091 03:57:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:04.091 [2024-12-09 03:57:45.917746] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:04.091 [2024-12-09 03:57:45.917898] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59454 ] 00:08:04.349 [2024-12-09 03:57:46.070748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.349 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:04.349 [2024-12-09 03:57:46.147018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.284 [2024-12-09T03:57:47.234Z] ====================================== 00:08:05.284 [2024-12-09T03:57:47.234Z] busy:2213438278 (cyc) 00:08:05.284 [2024-12-09T03:57:47.234Z] total_run_count: 336000 00:08:05.284 [2024-12-09T03:57:47.234Z] tsc_hz: 2200000000 (cyc) 00:08:05.284 [2024-12-09T03:57:47.234Z] ====================================== 00:08:05.284 [2024-12-09T03:57:47.234Z] poller_cost: 6587 (cyc), 2994 (nsec) 00:08:05.284 ************************************ 00:08:05.284 END TEST thread_poller_perf 00:08:05.284 ************************************ 00:08:05.284 00:08:05.284 real 0m1.320s 00:08:05.284 user 0m1.160s 00:08:05.284 sys 0m0.052s 00:08:05.284 03:57:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.284 03:57:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:05.542 03:57:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:05.542 03:57:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:05.542 03:57:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.542 03:57:47 thread -- common/autotest_common.sh@10 -- # set +x 00:08:05.542 ************************************ 00:08:05.542 START TEST thread_poller_perf 00:08:05.542 ************************************ 00:08:05.542 03:57:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:05.542 [2024-12-09 03:57:47.292739] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:05.542 [2024-12-09 03:57:47.292820] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59488 ] 00:08:05.542 [2024-12-09 03:57:47.434324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.802 Running 1000 pollers for 1 seconds with 0 microseconds period. 
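The poller_cost figures reported by poller_perf are the busy TSC cycles divided by the number of poller invocations, converted to nanoseconds using the reported tsc_hz. Reproducing the first run (-b 1000 pollers, -l 1 microsecond period, -t 1 second) from the counters above, as a sketch:

  # Sketch: how poller_cost follows from the counters printed above.
  echo $(( 2213438278 / 336000 ))                               # ~6587 cyc per poll
  echo $(( 2213438278 / 336000 * 1000000000 / 2200000000 ))     # ~2994 nsec at tsc_hz 2200000000

The second run, started just above with a 0 microsecond period, keeps the pollers permanently runnable, so it executes far more iterations and reports a much lower per-poll cost.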
00:08:05.802 [2024-12-09 03:57:47.507452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.737 [2024-12-09T03:57:48.687Z] ====================================== 00:08:06.737 [2024-12-09T03:57:48.687Z] busy:2202366778 (cyc) 00:08:06.737 [2024-12-09T03:57:48.687Z] total_run_count: 4255000 00:08:06.737 [2024-12-09T03:57:48.687Z] tsc_hz: 2200000000 (cyc) 00:08:06.737 [2024-12-09T03:57:48.687Z] ====================================== 00:08:06.737 [2024-12-09T03:57:48.687Z] poller_cost: 517 (cyc), 235 (nsec) 00:08:06.737 00:08:06.737 real 0m1.288s 00:08:06.737 user 0m1.138s 00:08:06.737 sys 0m0.043s 00:08:06.737 03:57:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.737 ************************************ 00:08:06.737 END TEST thread_poller_perf 00:08:06.737 ************************************ 00:08:06.737 03:57:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:06.737 03:57:48 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:06.737 ************************************ 00:08:06.737 END TEST thread 00:08:06.737 ************************************ 00:08:06.737 00:08:06.737 real 0m2.913s 00:08:06.737 user 0m2.459s 00:08:06.737 sys 0m0.237s 00:08:06.737 03:57:48 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.737 03:57:48 thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.737 03:57:48 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:06.737 03:57:48 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:06.737 03:57:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.737 03:57:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.737 03:57:48 -- common/autotest_common.sh@10 -- # set +x 00:08:06.737 ************************************ 00:08:06.738 START TEST app_cmdline 00:08:06.738 ************************************ 00:08:06.738 03:57:48 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:06.996 * Looking for test storage... 
00:08:06.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:06.996 03:57:48 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:06.996 03:57:48 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:08:06.996 03:57:48 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:06.996 03:57:48 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.996 03:57:48 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:06.996 03:57:48 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.996 03:57:48 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:06.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.996 --rc genhtml_branch_coverage=1 00:08:06.996 --rc genhtml_function_coverage=1 00:08:06.996 --rc genhtml_legend=1 00:08:06.996 --rc geninfo_all_blocks=1 00:08:06.996 --rc geninfo_unexecuted_blocks=1 00:08:06.996 00:08:06.996 ' 00:08:06.996 03:57:48 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:06.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.996 --rc genhtml_branch_coverage=1 00:08:06.996 --rc genhtml_function_coverage=1 00:08:06.996 --rc genhtml_legend=1 00:08:06.996 --rc geninfo_all_blocks=1 00:08:06.996 --rc geninfo_unexecuted_blocks=1 00:08:06.996 
00:08:06.996 ' 00:08:06.996 03:57:48 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:06.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.996 --rc genhtml_branch_coverage=1 00:08:06.996 --rc genhtml_function_coverage=1 00:08:06.996 --rc genhtml_legend=1 00:08:06.996 --rc geninfo_all_blocks=1 00:08:06.996 --rc geninfo_unexecuted_blocks=1 00:08:06.996 00:08:06.996 ' 00:08:06.996 03:57:48 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:06.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.996 --rc genhtml_branch_coverage=1 00:08:06.996 --rc genhtml_function_coverage=1 00:08:06.996 --rc genhtml_legend=1 00:08:06.996 --rc geninfo_all_blocks=1 00:08:06.996 --rc geninfo_unexecuted_blocks=1 00:08:06.996 00:08:06.996 ' 00:08:06.996 03:57:48 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:06.996 03:57:48 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59572 00:08:06.996 03:57:48 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59572 00:08:06.996 03:57:48 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59572 ']' 00:08:06.996 03:57:48 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:06.996 03:57:48 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.996 03:57:48 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.996 03:57:48 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.996 03:57:48 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.996 03:57:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:06.996 [2024-12-09 03:57:48.922402] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:06.996 [2024-12-09 03:57:48.922545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59572 ] 00:08:07.255 [2024-12-09 03:57:49.068038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.255 [2024-12-09 03:57:49.147313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.514 [2024-12-09 03:57:49.249319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.773 03:57:49 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.773 03:57:49 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:07.773 03:57:49 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:08.032 { 00:08:08.032 "version": "SPDK v25.01-pre git sha1 5f032e8b7", 00:08:08.032 "fields": { 00:08:08.032 "major": 25, 00:08:08.032 "minor": 1, 00:08:08.032 "patch": 0, 00:08:08.032 "suffix": "-pre", 00:08:08.032 "commit": "5f032e8b7" 00:08:08.032 } 00:08:08.032 } 00:08:08.032 03:57:49 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:08.032 03:57:49 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:08.032 03:57:49 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:08.032 03:57:49 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:08.032 03:57:49 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:08.032 03:57:49 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.032 03:57:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:08.032 03:57:49 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:08.032 03:57:49 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:08.032 03:57:49 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.032 03:57:49 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:08.032 03:57:49 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:08.032 03:57:49 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:08.032 03:57:49 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:08.032 03:57:49 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:08.032 03:57:49 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.032 03:57:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.032 03:57:49 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.032 03:57:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.033 03:57:49 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.033 03:57:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.033 03:57:49 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.033 03:57:49 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:08.033 03:57:49 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:08.293 request: 00:08:08.293 { 00:08:08.293 "method": "env_dpdk_get_mem_stats", 00:08:08.293 "req_id": 1 00:08:08.293 } 00:08:08.293 Got JSON-RPC error response 00:08:08.293 response: 00:08:08.293 { 00:08:08.293 "code": -32601, 00:08:08.293 "message": "Method not found" 00:08:08.293 } 00:08:08.293 03:57:50 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:08.293 03:57:50 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:08.293 03:57:50 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:08.293 03:57:50 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:08.293 03:57:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59572 00:08:08.293 03:57:50 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59572 ']' 00:08:08.293 03:57:50 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59572 00:08:08.293 03:57:50 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:08.293 03:57:50 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.293 03:57:50 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59572 00:08:08.293 03:57:50 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.293 03:57:50 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.293 killing process with pid 59572 00:08:08.293 03:57:50 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59572' 00:08:08.293 03:57:50 app_cmdline -- common/autotest_common.sh@973 -- # kill 59572 00:08:08.293 03:57:50 app_cmdline -- common/autotest_common.sh@978 -- # wait 59572 00:08:08.863 00:08:08.863 real 0m2.095s 00:08:08.863 user 0m2.414s 00:08:08.863 sys 0m0.595s 00:08:08.863 03:57:50 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.863 03:57:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:08.863 ************************************ 00:08:08.863 END TEST app_cmdline 00:08:08.863 ************************************ 00:08:08.863 03:57:50 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:08.863 03:57:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:08.863 03:57:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.863 03:57:50 -- common/autotest_common.sh@10 -- # set +x 00:08:09.124 ************************************ 00:08:09.124 START TEST version 00:08:09.124 ************************************ 00:08:09.124 03:57:50 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:09.124 * Looking for test storage... 
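The env_dpdk_get_mem_stats failure above is the expected outcome of this test: the spdk_tgt for app_cmdline was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served and anything else is rejected with JSON-RPC -32601 "Method not found". Roughly, the three calls the script drives look like this sketch (using the rpc.py path shown in this log):

  # Sketch of the allowlist behaviour being verified.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # allowed: returns the version object seen above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods         # allowed: lists exactly the two permitted methods
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # not allowlisted: fails with -32601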
00:08:09.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:09.124 03:57:50 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:09.124 03:57:50 version -- common/autotest_common.sh@1711 -- # lcov --version 00:08:09.124 03:57:50 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:09.124 03:57:50 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:09.124 03:57:50 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.124 03:57:50 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.124 03:57:50 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.124 03:57:50 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.124 03:57:50 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.124 03:57:50 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.124 03:57:50 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.124 03:57:50 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.124 03:57:50 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.124 03:57:50 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.124 03:57:50 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.124 03:57:50 version -- scripts/common.sh@344 -- # case "$op" in 00:08:09.124 03:57:50 version -- scripts/common.sh@345 -- # : 1 00:08:09.124 03:57:50 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.124 03:57:50 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:09.124 03:57:50 version -- scripts/common.sh@365 -- # decimal 1 00:08:09.124 03:57:50 version -- scripts/common.sh@353 -- # local d=1 00:08:09.124 03:57:50 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.124 03:57:50 version -- scripts/common.sh@355 -- # echo 1 00:08:09.124 03:57:50 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.124 03:57:50 version -- scripts/common.sh@366 -- # decimal 2 00:08:09.124 03:57:50 version -- scripts/common.sh@353 -- # local d=2 00:08:09.124 03:57:50 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.124 03:57:50 version -- scripts/common.sh@355 -- # echo 2 00:08:09.124 03:57:50 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.124 03:57:50 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.124 03:57:50 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.124 03:57:50 version -- scripts/common.sh@368 -- # return 0 00:08:09.124 03:57:50 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.124 03:57:50 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:09.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.124 --rc genhtml_branch_coverage=1 00:08:09.124 --rc genhtml_function_coverage=1 00:08:09.124 --rc genhtml_legend=1 00:08:09.124 --rc geninfo_all_blocks=1 00:08:09.124 --rc geninfo_unexecuted_blocks=1 00:08:09.124 00:08:09.124 ' 00:08:09.124 03:57:50 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:09.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.124 --rc genhtml_branch_coverage=1 00:08:09.124 --rc genhtml_function_coverage=1 00:08:09.124 --rc genhtml_legend=1 00:08:09.124 --rc geninfo_all_blocks=1 00:08:09.124 --rc geninfo_unexecuted_blocks=1 00:08:09.124 00:08:09.124 ' 00:08:09.124 03:57:50 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:09.124 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:09.124 --rc genhtml_branch_coverage=1 00:08:09.124 --rc genhtml_function_coverage=1 00:08:09.124 --rc genhtml_legend=1 00:08:09.124 --rc geninfo_all_blocks=1 00:08:09.124 --rc geninfo_unexecuted_blocks=1 00:08:09.124 00:08:09.124 ' 00:08:09.124 03:57:50 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:09.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.124 --rc genhtml_branch_coverage=1 00:08:09.124 --rc genhtml_function_coverage=1 00:08:09.124 --rc genhtml_legend=1 00:08:09.124 --rc geninfo_all_blocks=1 00:08:09.124 --rc geninfo_unexecuted_blocks=1 00:08:09.124 00:08:09.124 ' 00:08:09.124 03:57:50 version -- app/version.sh@17 -- # get_header_version major 00:08:09.124 03:57:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:09.124 03:57:50 version -- app/version.sh@14 -- # cut -f2 00:08:09.124 03:57:50 version -- app/version.sh@14 -- # tr -d '"' 00:08:09.124 03:57:51 version -- app/version.sh@17 -- # major=25 00:08:09.125 03:57:51 version -- app/version.sh@18 -- # get_header_version minor 00:08:09.125 03:57:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:09.125 03:57:51 version -- app/version.sh@14 -- # cut -f2 00:08:09.125 03:57:51 version -- app/version.sh@14 -- # tr -d '"' 00:08:09.125 03:57:51 version -- app/version.sh@18 -- # minor=1 00:08:09.125 03:57:51 version -- app/version.sh@19 -- # get_header_version patch 00:08:09.125 03:57:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:09.125 03:57:51 version -- app/version.sh@14 -- # tr -d '"' 00:08:09.125 03:57:51 version -- app/version.sh@14 -- # cut -f2 00:08:09.125 03:57:51 version -- app/version.sh@19 -- # patch=0 00:08:09.125 03:57:51 version -- app/version.sh@20 -- # get_header_version suffix 00:08:09.125 03:57:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:09.125 03:57:51 version -- app/version.sh@14 -- # tr -d '"' 00:08:09.125 03:57:51 version -- app/version.sh@14 -- # cut -f2 00:08:09.125 03:57:51 version -- app/version.sh@20 -- # suffix=-pre 00:08:09.125 03:57:51 version -- app/version.sh@22 -- # version=25.1 00:08:09.125 03:57:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:09.125 03:57:51 version -- app/version.sh@28 -- # version=25.1rc0 00:08:09.125 03:57:51 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:09.125 03:57:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:09.125 03:57:51 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:09.125 03:57:51 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:09.125 00:08:09.125 real 0m0.250s 00:08:09.125 user 0m0.154s 00:08:09.125 sys 0m0.131s 00:08:09.125 03:57:51 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.125 03:57:51 version -- common/autotest_common.sh@10 -- # set +x 00:08:09.125 ************************************ 00:08:09.125 END TEST version 00:08:09.125 ************************************ 00:08:09.383 03:57:51 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:09.383 03:57:51 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:09.383 03:57:51 -- spdk/autotest.sh@194 -- # uname -s 00:08:09.383 03:57:51 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:09.383 03:57:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:09.383 03:57:51 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:08:09.383 03:57:51 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:08:09.383 03:57:51 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:09.383 03:57:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.383 03:57:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.383 03:57:51 -- common/autotest_common.sh@10 -- # set +x 00:08:09.383 ************************************ 00:08:09.383 START TEST spdk_dd 00:08:09.383 ************************************ 00:08:09.383 03:57:51 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:09.383 * Looking for test storage... 00:08:09.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:09.383 03:57:51 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:09.383 03:57:51 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:09.383 03:57:51 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:09.383 03:57:51 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@345 -- # : 1 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@368 -- # return 0 00:08:09.383 03:57:51 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.383 03:57:51 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:09.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.383 --rc genhtml_branch_coverage=1 00:08:09.383 --rc genhtml_function_coverage=1 00:08:09.383 --rc genhtml_legend=1 00:08:09.383 --rc geninfo_all_blocks=1 00:08:09.383 --rc geninfo_unexecuted_blocks=1 00:08:09.383 00:08:09.383 ' 00:08:09.383 03:57:51 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:09.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.383 --rc genhtml_branch_coverage=1 00:08:09.383 --rc genhtml_function_coverage=1 00:08:09.383 --rc genhtml_legend=1 00:08:09.383 --rc geninfo_all_blocks=1 00:08:09.383 --rc geninfo_unexecuted_blocks=1 00:08:09.383 00:08:09.383 ' 00:08:09.383 03:57:51 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:09.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.383 --rc genhtml_branch_coverage=1 00:08:09.383 --rc genhtml_function_coverage=1 00:08:09.383 --rc genhtml_legend=1 00:08:09.383 --rc geninfo_all_blocks=1 00:08:09.383 --rc geninfo_unexecuted_blocks=1 00:08:09.383 00:08:09.383 ' 00:08:09.383 03:57:51 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:09.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.383 --rc genhtml_branch_coverage=1 00:08:09.383 --rc genhtml_function_coverage=1 00:08:09.383 --rc genhtml_legend=1 00:08:09.383 --rc geninfo_all_blocks=1 00:08:09.383 --rc geninfo_unexecuted_blocks=1 00:08:09.383 00:08:09.383 ' 00:08:09.383 03:57:51 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.383 03:57:51 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.383 03:57:51 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.383 03:57:51 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.383 03:57:51 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.383 03:57:51 spdk_dd -- paths/export.sh@5 -- # export PATH 00:08:09.383 03:57:51 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.383 03:57:51 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:09.950 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:09.950 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:09.950 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:09.950 03:57:51 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:08:09.950 03:57:51 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:08:09.950 03:57:51 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:08:09.950 03:57:51 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:08:09.950 03:57:51 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:08:09.950 03:57:51 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:09.950 03:57:51 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:08:09.950 03:57:51 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:08:09.950 03:57:51 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:08:09.950 03:57:51 spdk_dd -- scripts/common.sh@233 -- # local class 00:08:09.950 03:57:51 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:08:09.950 03:57:51 spdk_dd -- scripts/common.sh@235 -- # local progif 00:08:09.950 03:57:51 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:08:09.950 03:57:51 spdk_dd -- scripts/common.sh@236 -- # class=01 00:08:09.950 03:57:51 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:08:09.950 03:57:51 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:08:09.950 03:57:51 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:08:09.950 03:57:51 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:08:09.950 03:57:51 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:08:09.950 03:57:51 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:08:09.950 03:57:51 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@18 -- # local i 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@27 -- # return 0 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@18 -- # local i 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@27 -- # return 0 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:08:09.951 03:57:51 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:09.951 03:57:51 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@139 -- # local lib 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
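A minimal sketch of the device enumeration completed above (assuming only that lspci is installed): class 01, subclass 08, prog-if 02 select NVMe controllers, and on this host the pipeline traced out of scripts/common.sh prints 0000:00:10.0 and 0000:00:11.0:

    # sketch of the nvme_in_userspace / iter_pci_class_code 01 08 02 enumeration shown in the trace
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'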
00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.951 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:08:09.952 * spdk_dd linked to liburing 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:08:09.952 03:57:51 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:08:09.952 03:57:51 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:08:09.952 03:57:51 spdk_dd -- dd/common.sh@153 -- # return 0 00:08:09.952 03:57:51 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:08:09.952 03:57:51 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:09.952 03:57:51 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:09.952 03:57:51 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.952 03:57:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:09.952 ************************************ 00:08:09.952 START TEST spdk_dd_basic_rw 00:08:09.952 ************************************ 00:08:09.952 03:57:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:10.212 * Looking for test storage... 00:08:10.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:10.212 03:57:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:10.212 03:57:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:08:10.212 03:57:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:10.212 03:57:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:10.212 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.212 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.212 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.212 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.212 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.212 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.212 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.212 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.212 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.212 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.212 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.212 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:08:10.212 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:10.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.213 --rc genhtml_branch_coverage=1 00:08:10.213 --rc genhtml_function_coverage=1 00:08:10.213 --rc genhtml_legend=1 00:08:10.213 --rc geninfo_all_blocks=1 00:08:10.213 --rc geninfo_unexecuted_blocks=1 00:08:10.213 00:08:10.213 ' 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:10.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.213 --rc genhtml_branch_coverage=1 00:08:10.213 --rc genhtml_function_coverage=1 00:08:10.213 --rc genhtml_legend=1 00:08:10.213 --rc geninfo_all_blocks=1 00:08:10.213 --rc geninfo_unexecuted_blocks=1 00:08:10.213 00:08:10.213 ' 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:10.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.213 --rc genhtml_branch_coverage=1 00:08:10.213 --rc genhtml_function_coverage=1 00:08:10.213 --rc genhtml_legend=1 00:08:10.213 --rc geninfo_all_blocks=1 00:08:10.213 --rc geninfo_unexecuted_blocks=1 00:08:10.213 00:08:10.213 ' 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:10.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.213 --rc genhtml_branch_coverage=1 00:08:10.213 --rc genhtml_function_coverage=1 00:08:10.213 --rc genhtml_legend=1 00:08:10.213 --rc geninfo_all_blocks=1 00:08:10.213 --rc geninfo_unexecuted_blocks=1 00:08:10.213 00:08:10.213 ' 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:08:10.213 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:08:10.474 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:08:10.474 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:10.475 ************************************ 00:08:10.475 START TEST dd_bs_lt_native_bs 00:08:10.475 ************************************ 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:10.475 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:10.475 { 00:08:10.475 "subsystems": [ 00:08:10.475 { 00:08:10.475 "subsystem": "bdev", 00:08:10.475 "config": [ 00:08:10.475 { 00:08:10.475 "params": { 00:08:10.475 "trtype": "pcie", 00:08:10.475 "traddr": "0000:00:10.0", 00:08:10.475 "name": "Nvme0" 00:08:10.475 }, 00:08:10.475 "method": "bdev_nvme_attach_controller" 00:08:10.475 }, 00:08:10.475 { 00:08:10.475 "method": "bdev_wait_for_examine" 00:08:10.475 } 00:08:10.475 ] 00:08:10.475 } 00:08:10.475 ] 00:08:10.475 } 00:08:10.475 [2024-12-09 03:57:52.325086] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:10.475 [2024-12-09 03:57:52.325269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59916 ] 00:08:10.733 [2024-12-09 03:57:52.474940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.733 [2024-12-09 03:57:52.543026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.733 [2024-12-09 03:57:52.603294] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.991 [2024-12-09 03:57:52.717357] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:08:10.991 [2024-12-09 03:57:52.717421] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.991 [2024-12-09 03:57:52.861684] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:10.991 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:08:10.991 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:10.991 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:08:10.991 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:08:10.991 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:08:10.991 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:10.991 00:08:10.991 real 0m0.672s 00:08:10.991 user 0m0.458s 00:08:10.991 sys 0m0.172s 00:08:10.991 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.991 03:57:52 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:08:10.991 ************************************ 00:08:10.991 END TEST dd_bs_lt_native_bs 00:08:10.991 ************************************ 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:11.250 ************************************ 00:08:11.250 START TEST dd_rw 00:08:11.250 ************************************ 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:11.250 03:57:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:11.814 03:57:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:08:11.814 03:57:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:11.814 03:57:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:11.814 03:57:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:11.814 [2024-12-09 03:57:53.694776] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
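The dd_bs_lt_native_bs case that finishes above is a negative test: spdk_dd is expected to refuse a --bs of 2048 because the namespace's native block size (LBA Format #04 in the Identify dump) is 4096 bytes. A minimal standalone sketch of the same check, run outside the CI harness (the spdk_dd path is taken from the trace; bdev.json is a hypothetical config file standing in for the /dev/fd descriptors the harness uses):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# Expect a non-zero exit and the "--bs value cannot be less than ... native block size" error.
if ! "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=2048 --json bdev.json; then
    echo "spdk_dd rejected bs=2048 (< 4096 native) as expected"
fi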
00:08:11.814 [2024-12-09 03:57:53.694884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59952 ] 00:08:11.814 { 00:08:11.814 "subsystems": [ 00:08:11.814 { 00:08:11.814 "subsystem": "bdev", 00:08:11.814 "config": [ 00:08:11.814 { 00:08:11.814 "params": { 00:08:11.814 "trtype": "pcie", 00:08:11.814 "traddr": "0000:00:10.0", 00:08:11.814 "name": "Nvme0" 00:08:11.814 }, 00:08:11.814 "method": "bdev_nvme_attach_controller" 00:08:11.814 }, 00:08:11.814 { 00:08:11.814 "method": "bdev_wait_for_examine" 00:08:11.814 } 00:08:11.814 ] 00:08:11.814 } 00:08:11.814 ] 00:08:11.814 } 00:08:12.075 [2024-12-09 03:57:53.846811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.075 [2024-12-09 03:57:53.915249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.075 [2024-12-09 03:57:53.975504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.332  [2024-12-09T03:57:54.540Z] Copying: 60/60 [kB] (average 19 MBps) 00:08:12.590 00:08:12.590 03:57:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:08:12.590 03:57:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:12.590 03:57:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:12.590 03:57:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:12.590 [2024-12-09 03:57:54.346788] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
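For orientation, the dd_rw sweep now underway derives its block sizes by left-shifting the 4096-byte native block size and runs each one at queue depths 1 and 64. A rough sketch of that parameter matrix (variable names mirror the traced native_bs/bss/qds; the count/size pairs in the comment are the values observed later in this log, not computed here):

native_bs=4096            # from "LBA Format #04: Data Size: 4096" in the Identify output above
qds=(1 64)                # each block size is exercised at qd=1 and qd=64
bss=()
for s in 0 1 2; do
    bss+=( $((native_bs << s)) )      # 4096, 8192, 16384
done
# Observed pairs in the trace: bs=4096 -> count=15 (61440 B), bs=8192 -> count=7 (57344 B),
# bs=16384 -> count=3 (49152 B); each (bs, qd) pair is one write/read/verify pass.
for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        echo "pass: bs=$bs qd=$qd"
    done
done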
00:08:12.590 [2024-12-09 03:57:54.346869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59966 ] 00:08:12.590 { 00:08:12.590 "subsystems": [ 00:08:12.590 { 00:08:12.590 "subsystem": "bdev", 00:08:12.590 "config": [ 00:08:12.590 { 00:08:12.590 "params": { 00:08:12.590 "trtype": "pcie", 00:08:12.590 "traddr": "0000:00:10.0", 00:08:12.590 "name": "Nvme0" 00:08:12.590 }, 00:08:12.590 "method": "bdev_nvme_attach_controller" 00:08:12.590 }, 00:08:12.590 { 00:08:12.590 "method": "bdev_wait_for_examine" 00:08:12.590 } 00:08:12.590 ] 00:08:12.590 } 00:08:12.590 ] 00:08:12.590 } 00:08:12.590 [2024-12-09 03:57:54.489922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.861 [2024-12-09 03:57:54.541585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.862 [2024-12-09 03:57:54.595794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.862  [2024-12-09T03:57:55.069Z] Copying: 60/60 [kB] (average 14 MBps) 00:08:13.119 00:08:13.120 03:57:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.120 03:57:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:13.120 03:57:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:13.120 03:57:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:13.120 03:57:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:13.120 03:57:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:13.120 03:57:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:13.120 03:57:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:13.120 03:57:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:13.120 03:57:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:13.120 03:57:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:13.120 [2024-12-09 03:57:54.975887] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:13.120 [2024-12-09 03:57:54.975998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59987 ] 00:08:13.120 { 00:08:13.120 "subsystems": [ 00:08:13.120 { 00:08:13.120 "subsystem": "bdev", 00:08:13.120 "config": [ 00:08:13.120 { 00:08:13.120 "params": { 00:08:13.120 "trtype": "pcie", 00:08:13.120 "traddr": "0000:00:10.0", 00:08:13.120 "name": "Nvme0" 00:08:13.120 }, 00:08:13.120 "method": "bdev_nvme_attach_controller" 00:08:13.120 }, 00:08:13.120 { 00:08:13.120 "method": "bdev_wait_for_examine" 00:08:13.120 } 00:08:13.120 ] 00:08:13.120 } 00:08:13.120 ] 00:08:13.120 } 00:08:13.377 [2024-12-09 03:57:55.121850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.377 [2024-12-09 03:57:55.178841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.377 [2024-12-09 03:57:55.234955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.634  [2024-12-09T03:57:55.584Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:13.634 00:08:13.634 03:57:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:13.634 03:57:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:13.634 03:57:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:13.634 03:57:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:13.634 03:57:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:13.634 03:57:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:13.634 03:57:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:14.199 03:57:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:14.199 03:57:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:14.199 03:57:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:14.199 03:57:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:14.457 [2024-12-09 03:57:56.168161] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
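Each (bs, qd) pass above follows the same four-step shape: write a generated pattern file into the bdev, read the same number of blocks back into a second file, byte-compare the two, then blank the region with zeroes before the next pass. A condensed sketch of one pass using the flags visible in the trace (bdev.json again stands in for the config the harness feeds over /dev/fd/62, and the dump file names are shortened):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF=bdev.json
"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1            --json "$CONF"   # write the pattern
"$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json "$CONF"   # read it back
diff -q dd.dump0 dd.dump1                                                          # verify the round trip
"$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1     --json "$CONF"   # clear before the next pass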
00:08:14.457 [2024-12-09 03:57:56.168305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60006 ] 00:08:14.457 { 00:08:14.457 "subsystems": [ 00:08:14.457 { 00:08:14.457 "subsystem": "bdev", 00:08:14.458 "config": [ 00:08:14.458 { 00:08:14.458 "params": { 00:08:14.458 "trtype": "pcie", 00:08:14.458 "traddr": "0000:00:10.0", 00:08:14.458 "name": "Nvme0" 00:08:14.458 }, 00:08:14.458 "method": "bdev_nvme_attach_controller" 00:08:14.458 }, 00:08:14.458 { 00:08:14.458 "method": "bdev_wait_for_examine" 00:08:14.458 } 00:08:14.458 ] 00:08:14.458 } 00:08:14.458 ] 00:08:14.458 } 00:08:14.458 [2024-12-09 03:57:56.315677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.458 [2024-12-09 03:57:56.368993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.715 [2024-12-09 03:57:56.426333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.715  [2024-12-09T03:57:56.923Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:14.973 00:08:14.973 03:57:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:14.973 03:57:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:14.973 03:57:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:14.973 03:57:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:14.973 [2024-12-09 03:57:56.794784] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:14.973 [2024-12-09 03:57:56.794880] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60022 ] 00:08:14.973 { 00:08:14.973 "subsystems": [ 00:08:14.973 { 00:08:14.973 "subsystem": "bdev", 00:08:14.973 "config": [ 00:08:14.973 { 00:08:14.973 "params": { 00:08:14.973 "trtype": "pcie", 00:08:14.973 "traddr": "0000:00:10.0", 00:08:14.973 "name": "Nvme0" 00:08:14.973 }, 00:08:14.973 "method": "bdev_nvme_attach_controller" 00:08:14.973 }, 00:08:14.973 { 00:08:14.973 "method": "bdev_wait_for_examine" 00:08:14.973 } 00:08:14.973 ] 00:08:14.973 } 00:08:14.973 ] 00:08:14.973 } 00:08:15.245 [2024-12-09 03:57:56.938583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.245 [2024-12-09 03:57:57.006853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.245 [2024-12-09 03:57:57.068252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.513  [2024-12-09T03:57:57.463Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:15.513 00:08:15.513 03:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.513 03:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:15.513 03:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:15.513 03:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:15.513 03:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:15.513 03:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:15.513 03:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:15.513 03:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:15.513 03:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:15.513 03:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:15.513 03:57:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:15.513 [2024-12-09 03:57:57.448836] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:15.513 [2024-12-09 03:57:57.449490] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60037 ] 00:08:15.513 { 00:08:15.513 "subsystems": [ 00:08:15.513 { 00:08:15.513 "subsystem": "bdev", 00:08:15.513 "config": [ 00:08:15.513 { 00:08:15.513 "params": { 00:08:15.513 "trtype": "pcie", 00:08:15.513 "traddr": "0000:00:10.0", 00:08:15.513 "name": "Nvme0" 00:08:15.513 }, 00:08:15.513 "method": "bdev_nvme_attach_controller" 00:08:15.513 }, 00:08:15.513 { 00:08:15.513 "method": "bdev_wait_for_examine" 00:08:15.513 } 00:08:15.513 ] 00:08:15.513 } 00:08:15.513 ] 00:08:15.513 } 00:08:15.771 [2024-12-09 03:57:57.595317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.771 [2024-12-09 03:57:57.657804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.771 [2024-12-09 03:57:57.714370] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.029  [2024-12-09T03:57:58.237Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:16.287 00:08:16.287 03:57:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:16.287 03:57:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:16.287 03:57:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:16.287 03:57:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:16.287 03:57:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:16.287 03:57:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:16.287 03:57:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:16.287 03:57:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:16.854 03:57:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:16.854 03:57:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:16.854 03:57:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:16.854 03:57:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:16.854 { 00:08:16.854 "subsystems": [ 00:08:16.854 { 00:08:16.854 "subsystem": "bdev", 00:08:16.854 "config": [ 00:08:16.854 { 00:08:16.854 "params": { 00:08:16.854 "trtype": "pcie", 00:08:16.854 "traddr": "0000:00:10.0", 00:08:16.854 "name": "Nvme0" 00:08:16.854 }, 00:08:16.854 "method": "bdev_nvme_attach_controller" 00:08:16.854 }, 00:08:16.854 { 00:08:16.854 "method": "bdev_wait_for_examine" 00:08:16.854 } 00:08:16.854 ] 00:08:16.854 } 00:08:16.854 ] 00:08:16.854 } 00:08:16.854 [2024-12-09 03:57:58.676703] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:16.854 [2024-12-09 03:57:58.676878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60056 ] 00:08:17.114 [2024-12-09 03:57:58.834336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.114 [2024-12-09 03:57:58.889663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.114 [2024-12-09 03:57:58.948344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.114  [2024-12-09T03:57:59.323Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:17.373 00:08:17.373 03:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:17.373 03:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:17.373 03:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:17.373 03:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:17.373 [2024-12-09 03:57:59.314775] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:17.373 [2024-12-09 03:57:59.314894] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60075 ] 00:08:17.632 { 00:08:17.632 "subsystems": [ 00:08:17.632 { 00:08:17.632 "subsystem": "bdev", 00:08:17.632 "config": [ 00:08:17.632 { 00:08:17.632 "params": { 00:08:17.632 "trtype": "pcie", 00:08:17.632 "traddr": "0000:00:10.0", 00:08:17.632 "name": "Nvme0" 00:08:17.632 }, 00:08:17.632 "method": "bdev_nvme_attach_controller" 00:08:17.632 }, 00:08:17.632 { 00:08:17.632 "method": "bdev_wait_for_examine" 00:08:17.632 } 00:08:17.632 ] 00:08:17.632 } 00:08:17.632 ] 00:08:17.632 } 00:08:17.632 [2024-12-09 03:57:59.459208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.632 [2024-12-09 03:57:59.510961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.632 [2024-12-09 03:57:59.568777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.890  [2024-12-09T03:58:00.099Z] Copying: 56/56 [kB] (average 27 MBps) 00:08:18.149 00:08:18.149 03:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:18.149 03:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:18.149 03:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:18.149 03:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:18.149 03:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:18.149 03:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:18.149 03:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:18.149 03:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:08:18.149 03:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:18.149 03:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:18.149 03:57:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:18.149 { 00:08:18.149 "subsystems": [ 00:08:18.149 { 00:08:18.149 "subsystem": "bdev", 00:08:18.149 "config": [ 00:08:18.149 { 00:08:18.149 "params": { 00:08:18.149 "trtype": "pcie", 00:08:18.149 "traddr": "0000:00:10.0", 00:08:18.149 "name": "Nvme0" 00:08:18.149 }, 00:08:18.149 "method": "bdev_nvme_attach_controller" 00:08:18.149 }, 00:08:18.149 { 00:08:18.149 "method": "bdev_wait_for_examine" 00:08:18.149 } 00:08:18.149 ] 00:08:18.149 } 00:08:18.149 ] 00:08:18.149 } 00:08:18.149 [2024-12-09 03:57:59.956014] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:18.149 [2024-12-09 03:57:59.956139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60096 ] 00:08:18.408 [2024-12-09 03:58:00.106097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.408 [2024-12-09 03:58:00.156313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.408 [2024-12-09 03:58:00.213224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.408  [2024-12-09T03:58:00.617Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:18.667 00:08:18.667 03:58:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:18.667 03:58:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:18.667 03:58:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:18.667 03:58:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:18.667 03:58:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:18.667 03:58:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:18.667 03:58:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:19.233 03:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:19.234 03:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:19.234 03:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:19.234 03:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:19.234 [2024-12-09 03:58:01.085030] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
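The JSON fragment repeated before every spdk_dd run above is the entire bdev configuration: attach the emulated NVMe controller at PCI address 0000:00:10.0 as bdev "Nvme0", then wait for bdev examination to finish. Written to an ordinary file instead of the anonymous descriptor gen_conf uses, it could be produced like this (the bdev.json file name is illustrative):

cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "trtype": "pcie",
            "traddr": "0000:00:10.0",
            "name": "Nvme0"
          },
          "method": "bdev_nvme_attach_controller"
        },
        {
          "method": "bdev_wait_for_examine"
        }
      ]
    }
  ]
}
EOF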
00:08:19.234 [2024-12-09 03:58:01.085159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60115 ] 00:08:19.234 { 00:08:19.234 "subsystems": [ 00:08:19.234 { 00:08:19.234 "subsystem": "bdev", 00:08:19.234 "config": [ 00:08:19.234 { 00:08:19.234 "params": { 00:08:19.234 "trtype": "pcie", 00:08:19.234 "traddr": "0000:00:10.0", 00:08:19.234 "name": "Nvme0" 00:08:19.234 }, 00:08:19.234 "method": "bdev_nvme_attach_controller" 00:08:19.234 }, 00:08:19.234 { 00:08:19.234 "method": "bdev_wait_for_examine" 00:08:19.234 } 00:08:19.234 ] 00:08:19.234 } 00:08:19.234 ] 00:08:19.234 } 00:08:19.491 [2024-12-09 03:58:01.240375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.491 [2024-12-09 03:58:01.311462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.491 [2024-12-09 03:58:01.372430] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.747  [2024-12-09T03:58:01.697Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:19.747 00:08:19.747 03:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:19.747 03:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:19.747 03:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:19.747 03:58:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:20.004 [2024-12-09 03:58:01.749389] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:20.004 [2024-12-09 03:58:01.749516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60123 ] 00:08:20.004 { 00:08:20.004 "subsystems": [ 00:08:20.004 { 00:08:20.004 "subsystem": "bdev", 00:08:20.004 "config": [ 00:08:20.004 { 00:08:20.004 "params": { 00:08:20.004 "trtype": "pcie", 00:08:20.004 "traddr": "0000:00:10.0", 00:08:20.004 "name": "Nvme0" 00:08:20.004 }, 00:08:20.004 "method": "bdev_nvme_attach_controller" 00:08:20.004 }, 00:08:20.004 { 00:08:20.004 "method": "bdev_wait_for_examine" 00:08:20.004 } 00:08:20.004 ] 00:08:20.004 } 00:08:20.004 ] 00:08:20.004 } 00:08:20.004 [2024-12-09 03:58:01.898196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.261 [2024-12-09 03:58:01.962109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.261 [2024-12-09 03:58:02.020370] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.261  [2024-12-09T03:58:02.468Z] Copying: 56/56 [kB] (average 27 MBps) 00:08:20.518 00:08:20.518 03:58:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:20.518 03:58:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:20.518 03:58:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:20.518 03:58:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:20.518 03:58:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:20.518 03:58:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:20.518 03:58:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:20.518 03:58:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:20.518 03:58:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:20.518 03:58:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:20.518 03:58:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:20.518 { 00:08:20.518 "subsystems": [ 00:08:20.518 { 00:08:20.518 "subsystem": "bdev", 00:08:20.518 "config": [ 00:08:20.518 { 00:08:20.518 "params": { 00:08:20.518 "trtype": "pcie", 00:08:20.518 "traddr": "0000:00:10.0", 00:08:20.518 "name": "Nvme0" 00:08:20.518 }, 00:08:20.518 "method": "bdev_nvme_attach_controller" 00:08:20.518 }, 00:08:20.518 { 00:08:20.518 "method": "bdev_wait_for_examine" 00:08:20.518 } 00:08:20.518 ] 00:08:20.518 } 00:08:20.518 ] 00:08:20.518 } 00:08:20.518 [2024-12-09 03:58:02.421377] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:20.518 [2024-12-09 03:58:02.421494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60144 ] 00:08:20.775 [2024-12-09 03:58:02.565716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.775 [2024-12-09 03:58:02.634019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.775 [2024-12-09 03:58:02.690518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.033  [2024-12-09T03:58:03.242Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:21.292 00:08:21.292 03:58:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:21.292 03:58:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:21.292 03:58:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:21.292 03:58:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:21.292 03:58:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:21.292 03:58:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:21.292 03:58:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:21.292 03:58:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:21.567 03:58:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:21.567 03:58:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:21.567 03:58:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:21.567 03:58:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:21.825 [2024-12-09 03:58:03.551486] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:21.825 [2024-12-09 03:58:03.551587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60163 ] 00:08:21.825 { 00:08:21.825 "subsystems": [ 00:08:21.825 { 00:08:21.825 "subsystem": "bdev", 00:08:21.825 "config": [ 00:08:21.825 { 00:08:21.825 "params": { 00:08:21.825 "trtype": "pcie", 00:08:21.825 "traddr": "0000:00:10.0", 00:08:21.825 "name": "Nvme0" 00:08:21.825 }, 00:08:21.825 "method": "bdev_nvme_attach_controller" 00:08:21.825 }, 00:08:21.825 { 00:08:21.825 "method": "bdev_wait_for_examine" 00:08:21.825 } 00:08:21.825 ] 00:08:21.825 } 00:08:21.825 ] 00:08:21.825 } 00:08:21.825 [2024-12-09 03:58:03.693225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.825 [2024-12-09 03:58:03.761823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.083 [2024-12-09 03:58:03.822573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.083  [2024-12-09T03:58:04.290Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:22.340 00:08:22.340 03:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:22.340 03:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:22.340 03:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:22.340 03:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:22.340 { 00:08:22.340 "subsystems": [ 00:08:22.340 { 00:08:22.340 "subsystem": "bdev", 00:08:22.340 "config": [ 00:08:22.340 { 00:08:22.340 "params": { 00:08:22.340 "trtype": "pcie", 00:08:22.340 "traddr": "0000:00:10.0", 00:08:22.340 "name": "Nvme0" 00:08:22.340 }, 00:08:22.340 "method": "bdev_nvme_attach_controller" 00:08:22.340 }, 00:08:22.340 { 00:08:22.340 "method": "bdev_wait_for_examine" 00:08:22.340 } 00:08:22.340 ] 00:08:22.340 } 00:08:22.340 ] 00:08:22.340 } 00:08:22.340 [2024-12-09 03:58:04.221836] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:22.340 [2024-12-09 03:58:04.222048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60183 ] 00:08:22.598 [2024-12-09 03:58:04.379355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.598 [2024-12-09 03:58:04.436570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.598 [2024-12-09 03:58:04.496743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.873  [2024-12-09T03:58:04.823Z] Copying: 48/48 [kB] (average 23 MBps) 00:08:22.873 00:08:22.873 03:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:23.131 03:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:23.131 03:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:23.131 03:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:23.131 03:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:23.131 03:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:23.131 03:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:23.131 03:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:23.131 03:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:23.131 03:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:23.131 03:58:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:23.131 [2024-12-09 03:58:04.876155] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:23.131 [2024-12-09 03:58:04.876303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60193 ] 00:08:23.131 { 00:08:23.131 "subsystems": [ 00:08:23.131 { 00:08:23.131 "subsystem": "bdev", 00:08:23.131 "config": [ 00:08:23.131 { 00:08:23.131 "params": { 00:08:23.131 "trtype": "pcie", 00:08:23.131 "traddr": "0000:00:10.0", 00:08:23.131 "name": "Nvme0" 00:08:23.131 }, 00:08:23.131 "method": "bdev_nvme_attach_controller" 00:08:23.131 }, 00:08:23.131 { 00:08:23.131 "method": "bdev_wait_for_examine" 00:08:23.131 } 00:08:23.131 ] 00:08:23.131 } 00:08:23.131 ] 00:08:23.131 } 00:08:23.131 [2024-12-09 03:58:05.019520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.389 [2024-12-09 03:58:05.080800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.389 [2024-12-09 03:58:05.141373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.389  [2024-12-09T03:58:05.596Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:23.646 00:08:23.646 03:58:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:23.646 03:58:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:23.646 03:58:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:23.647 03:58:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:23.647 03:58:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:23.647 03:58:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:23.647 03:58:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:24.213 03:58:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:24.213 03:58:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:24.213 03:58:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:24.213 03:58:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:24.213 { 00:08:24.213 "subsystems": [ 00:08:24.213 { 00:08:24.213 "subsystem": "bdev", 00:08:24.213 "config": [ 00:08:24.213 { 00:08:24.213 "params": { 00:08:24.213 "trtype": "pcie", 00:08:24.213 "traddr": "0000:00:10.0", 00:08:24.213 "name": "Nvme0" 00:08:24.213 }, 00:08:24.213 "method": "bdev_nvme_attach_controller" 00:08:24.213 }, 00:08:24.213 { 00:08:24.213 "method": "bdev_wait_for_examine" 00:08:24.213 } 00:08:24.213 ] 00:08:24.213 } 00:08:24.213 ] 00:08:24.213 } 00:08:24.213 [2024-12-09 03:58:06.002367] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:24.213 [2024-12-09 03:58:06.002509] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60212 ] 00:08:24.213 [2024-12-09 03:58:06.155704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.471 [2024-12-09 03:58:06.221413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.471 [2024-12-09 03:58:06.281680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.471  [2024-12-09T03:58:06.680Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:24.730 00:08:24.730 03:58:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:24.730 03:58:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:24.730 03:58:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:24.730 03:58:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:24.730 { 00:08:24.730 "subsystems": [ 00:08:24.730 { 00:08:24.730 "subsystem": "bdev", 00:08:24.730 "config": [ 00:08:24.730 { 00:08:24.730 "params": { 00:08:24.730 "trtype": "pcie", 00:08:24.730 "traddr": "0000:00:10.0", 00:08:24.730 "name": "Nvme0" 00:08:24.730 }, 00:08:24.730 "method": "bdev_nvme_attach_controller" 00:08:24.730 }, 00:08:24.730 { 00:08:24.730 "method": "bdev_wait_for_examine" 00:08:24.730 } 00:08:24.730 ] 00:08:24.730 } 00:08:24.730 ] 00:08:24.730 } 00:08:24.730 [2024-12-09 03:58:06.674984] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:24.730 [2024-12-09 03:58:06.675094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60231 ] 00:08:24.988 [2024-12-09 03:58:06.822138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.988 [2024-12-09 03:58:06.887697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.248 [2024-12-09 03:58:06.946949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.248  [2024-12-09T03:58:07.456Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:25.506 00:08:25.506 03:58:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:25.506 03:58:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:25.506 03:58:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:25.506 03:58:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:25.506 03:58:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:25.506 03:58:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:25.506 03:58:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:25.506 03:58:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:25.506 03:58:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:25.506 03:58:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:25.506 03:58:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:25.506 { 00:08:25.506 "subsystems": [ 00:08:25.506 { 00:08:25.506 "subsystem": "bdev", 00:08:25.506 "config": [ 00:08:25.506 { 00:08:25.506 "params": { 00:08:25.506 "trtype": "pcie", 00:08:25.506 "traddr": "0000:00:10.0", 00:08:25.506 "name": "Nvme0" 00:08:25.506 }, 00:08:25.506 "method": "bdev_nvme_attach_controller" 00:08:25.506 }, 00:08:25.506 { 00:08:25.506 "method": "bdev_wait_for_examine" 00:08:25.506 } 00:08:25.506 ] 00:08:25.506 } 00:08:25.507 ] 00:08:25.507 } 00:08:25.507 [2024-12-09 03:58:07.331005] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:25.507 [2024-12-09 03:58:07.331127] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60252 ] 00:08:25.765 [2024-12-09 03:58:07.478518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.765 [2024-12-09 03:58:07.533072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.765 [2024-12-09 03:58:07.589367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.765  [2024-12-09T03:58:07.974Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:26.024 00:08:26.024 ************************************ 00:08:26.024 END TEST dd_rw 00:08:26.024 ************************************ 00:08:26.024 00:08:26.024 real 0m14.920s 00:08:26.024 user 0m10.807s 00:08:26.024 sys 0m5.836s 00:08:26.024 03:58:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.024 03:58:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:26.024 03:58:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:26.024 03:58:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.024 03:58:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.024 03:58:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:26.024 ************************************ 00:08:26.024 START TEST dd_rw_offset 00:08:26.024 ************************************ 00:08:26.024 03:58:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:08:26.024 03:58:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:26.024 03:58:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:26.024 03:58:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:08:26.335 03:58:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:26.335 03:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:26.335 03:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=2oo3rk2h1a7wuyvneafaj2sw8nire2kwrrql3fu58lem3ip0hcaq6xqjzyfcov8tuc2hjr8pyu13po5ddmpjfczgbf7gy9a2nrq72yb59pxu8jodxn693k8hu9scbj8j143dvdqb5hus2p2ghnysjkel5i7fuiv9l2n9k3zpfcs4hs0pwkmbq1unjsshli1wjbb4rrt38yqik05fned3af1dtby020yj0drzzqkdrqyrla4zene1gdlnvtx48bkn4dksgnyeimxg64zpiwrr34bbcxa8dkglhkpakzc96d8w64myykvevskupygzejtcky89oeyi1gemvt8vhs3wly29c1v9ulqaeyuj22ilpbb8mblp4iuic4nf9qxkcstmpilb417s7ruml41778934dex2701oiupt56v1z2twnedp4cdxak1u48kcuglli37u5jojhvmvq11vpoiarqydll98qn5coq2zx49vh7ky59cbvyrp0wv9emd56r6mbm5v7zb04vns17njs01p4jox5fp26786u7ehcrm7djzcwvk7zby7xbr3vg33vciyd10bzjosma88n8n8ta36no529d6vxdcotneadvjkrr3f51uub14p039ce3xlt8psextgzewuqjni86z75mx7h63zd66oqieigzy3cccsspqi5ll49uj3r224uru4qpdweo5wdkosfn5u67ogb4r0gfgi2nkcrz17agsgc80t2baoke3ousuhe2glbw3kfugtwgw5mtl53fus16e7spp4v29edychuqxbp9izwtuq8obeq0zd4rfmheghoo1wt3v39980ibtgax4tsno2huspw76e1r7z1y6q7zzyjh4m3970e5w5voc5ic9d9dg2twlbb6upw0ntognknxde80shinixzskw8ytc9dip0yizu30z0mz518acooml3mf1akgt22hceh026myud45wgish3pxiuvcgohl9yvy5qxqmyptug9qhtfpjvhhbvyydy3pvw6ga0bhu9vizgx0bqjboi6a51fuqdu17k5u0572hmswwch83ntk0hu98r7uz0m9n38m1y1fefjcvet71aj7ieknwz1u7sd775fuq3c3xx9wd5dwimnw1a4t3qeh5k0vsfzigugi523by0m69kgpns5dw2y8b0y58wlyhuaqdvbxqo92m8bfolk2shyesyi5necphvtyl9b3wmrc22268i67y77hpth9zqh4vo7qqoddgrmo8w4bigdzku16vbln0u5rukwf35wz13esx8ka4ex3uuku2sgmxifjc6m6bw659f1l19ccl2nqwwuqx90i4e29apqwdj8fy4t5s5259dri1pwd2jvxf780f1exvtgfll50pral8cww7c16cdcfmzvjmvkr4ttc8g34yfqfvx0vwg6ad1vdi5hync09t8qbwcffzd7gt0a7xswpyti7ofvx8b795pvdc3mw1edk4fywowuiyhqu95qyeudde9326vdot6whxho0el5dvcgxwf0phmb11rps1c6njv8qvk6qvdib1ppxy3qc4vccc0wklxbny9cz85633vnqmm58mcap7sww7l05rwb5gs1bcgdsm63tobfqs3a2li06nhwh385a8e3jnqt8ibq7qn3brkc01m6njyt51arxube3cbn3azyjcgut8kywpentzfwh1hmlsojv1q0gaxgrqf8y8ik8goro89lj2hmf9i052qcp62czuv8km1zh2nb26v75e14xccy4xirkxnc66g11r2qjfumvrgfsrkf782238j60scje0ww5zkq5ghyxxdplevl50rcqrgzydsezydr4igt81ret40yi031eu73dx0yf2ooqveqi1qvikjn7iyt2szv7t01fskzxfb18lzl23dokt4lzlbqh8d5ko6qvrv330ciu28hwzydp62jepo0k35q5wdyv1p5sbx77vuc410gcp9whytmgfmt1hjauksleucicmvu60a70ofx6dt1pgoq4tzkezur24ge3u8fi1aus9k76bllw30za54420zqw0whlgnxgma3q3l315p8ont2ec65532okuz4o6hprfh6hngx5i0isp2iwv33m8i159ma4qh5284ncvfejsszieulvllvau7gxmoettvfzq1cdjnkkc8usxb5nd0i94rdb7pkiocb17hyy06r50jlf6xo2bb7mhc76p5nhkqpa2ndzo0agdd2l6j1aqlbxerpn0grwhtrke7024k0awgq55odwahltfc4yic92hie1kul7oiti0zecomsigjd5f783ew7cbzr982ndhix612vbdbpw35uok2tcyrmayfejpbq2i56c8riyd2ppvn23jdaxu3tztklkzzcu5xv4pg0pkyo5yvevn2jojxcalok4t8xm67ozvg17uvmciztufs5do5rtz81hddmgj8exa7xqu6wgh55qy60ulcf5gb5hig8756uvvwfjvjzsidg9thyjqjtm64ihueg0efco5anatgh6qdo2w60a0u2ueb47dng61892rhxldxqz4cexldrnkcksprdvyjg8rda0au5964gtgyf40s7m257di4zjtzgc0px66ir9e95tlnbgmwqofj4p1n90u3wftqubycuee1li0rqbfjynnygmt9xfyjz6br5giyoxrirr27r0xdo20559wykmep9ixp5b849lexw7noyweo4aofzfgm1n4sd7ex2dggfocjteqmiyvxgzevpeittjluzsgh7wlpnxzkbjjdy9595bu3lq7k3sye1inqjlc0xsbhf2txx3006fwwfhepqnlutatakanuodq0i1nrztk02cf9d58v09qgby7duwys7k2klrpvov26g5qqq0keo3py0as319oc5dwhh5tvy6dvyda5uf9rc73i7kl570uqbxco60ye5iffrbqw5313nw8hyv3jdbkf3mb42vbo97bhqlcb2qsa8zm9cy84hbay0mlvcr5maoiw919ypiu5y89cbdl64yaik5xqj96m1rq4hqyn4hz7a4zni5u6qqo70g2x1x29hwpgj1wvy4e5yjvydi7jfi0w0gix6nkb63r3f4ci1loid65rb9uq31u31j6hinl4i386eoi67fqji37zjzzxt7m5rp4ky1mv6szfttlz3r4wnvpdb63tylq6qiv2u791u4hxzw692zk3344ifkw3mmzx9qtciycona9xgpnnh9ksdownracamr4g2voj4n72hsdg2e8unmw4ii28xnl1l9hfkevikxlmjk1ttc6p9gu1eo24z8ogaxc0zcl0d9mdi21r05toflr0inbblp56rm7hrfx2h0o1sqg4kgls6xno50rhm2a12n8n9j0nwrogpiecm737tbin4rjv5tcg1uejhijgbh6ysgkr63sfjr0s3m8yj2pheav91imkyc6o80uevgaw0t2wej5zo3m1m9vn3x5hfbno21q75koowuxa2o01dlwhf86cf5u0s54aggxcrowct3e7w2av
nfb5o2klzqqozftp3ajiipzbfqzi5he66zb99seeo6oilkstk6od1hm9zswvsap9me9evlgczp3bcisxcaijfql0l99zb7tz9negi1cw1ne59icnyd3a8w1a3j8pmif6qu8h4o64n2kmj84fvj37pgvuz3v3rmm2i1i9il294w8p5mfnkz96xhamvnr9tc7r4omwqjy60fbgyrs31dzswtx5wdqwf99b80a5nd1xrkctakdrmpn8364jheur6gs62qg14j1i9xf04t3cbtrendj0x84x3ljeal3m1vtftrr8165zifg46zj685bawsb967f8bf8njuorf8hmve1ny42t8kfskevxrsk06iw2z507lwakgnw6squq30cmfijrjaadloeknof390pg67qf1d7twm1dcerwfp25ld5dfb8b0jz01r3tiwyuw61ukj4ji2dbioyld4tzrbuhcnq0j0x8qz8n2x43gous08nsv1lvr2t1q6o03ybjo7osz93kd1h8ool1adzhdp1v6me9jdg6e3dabbp15e 00:08:26.335 03:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:26.335 03:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:08:26.335 03:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:26.335 03:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:26.335 [2024-12-09 03:58:08.081480] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:26.335 [2024-12-09 03:58:08.081608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60277 ] 00:08:26.335 { 00:08:26.335 "subsystems": [ 00:08:26.335 { 00:08:26.335 "subsystem": "bdev", 00:08:26.335 "config": [ 00:08:26.335 { 00:08:26.335 "params": { 00:08:26.335 "trtype": "pcie", 00:08:26.335 "traddr": "0000:00:10.0", 00:08:26.335 "name": "Nvme0" 00:08:26.335 }, 00:08:26.335 "method": "bdev_nvme_attach_controller" 00:08:26.335 }, 00:08:26.335 { 00:08:26.335 "method": "bdev_wait_for_examine" 00:08:26.335 } 00:08:26.335 ] 00:08:26.335 } 00:08:26.335 ] 00:08:26.335 } 00:08:26.335 [2024-12-09 03:58:08.232578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.593 [2024-12-09 03:58:08.294256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.593 [2024-12-09 03:58:08.350540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.593  [2024-12-09T03:58:08.802Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:26.852 00:08:26.852 03:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:26.852 03:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:08:26.852 03:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:26.852 03:58:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:26.852 [2024-12-09 03:58:08.724123] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
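The dd_rw_offset pass above writes the freshly generated 4 KiB pattern one block into the bdev (--seek=1) and reads that single block back (--skip=1 --count=1) before comparing it against the generated string. A compact sketch of the round trip under the same assumptions as the earlier sketches (bdev.json and the shortened dump file names are illustrative; the harness itself re-reads the block with read -rn4096 and string-compares it rather than calling cmp):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF=bdev.json
"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1           --json "$CONF"   # write 1 block at LBA offset 1
"$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json "$CONF"   # read that block back
cmp dd.dump0 dd.dump1                                                     # stand-in for the harness's string compare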
00:08:26.852 [2024-12-09 03:58:08.724462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60296 ] 00:08:26.852 { 00:08:26.852 "subsystems": [ 00:08:26.852 { 00:08:26.852 "subsystem": "bdev", 00:08:26.852 "config": [ 00:08:26.852 { 00:08:26.852 "params": { 00:08:26.852 "trtype": "pcie", 00:08:26.852 "traddr": "0000:00:10.0", 00:08:26.852 "name": "Nvme0" 00:08:26.852 }, 00:08:26.852 "method": "bdev_nvme_attach_controller" 00:08:26.852 }, 00:08:26.852 { 00:08:26.852 "method": "bdev_wait_for_examine" 00:08:26.852 } 00:08:26.852 ] 00:08:26.852 } 00:08:26.852 ] 00:08:26.852 } 00:08:27.113 [2024-12-09 03:58:08.874418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.113 [2024-12-09 03:58:08.934906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.113 [2024-12-09 03:58:08.995156] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.378  [2024-12-09T03:58:09.328Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:27.378 00:08:27.638 03:58:09 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:27.639 03:58:09 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 2oo3rk2h1a7wuyvneafaj2sw8nire2kwrrql3fu58lem3ip0hcaq6xqjzyfcov8tuc2hjr8pyu13po5ddmpjfczgbf7gy9a2nrq72yb59pxu8jodxn693k8hu9scbj8j143dvdqb5hus2p2ghnysjkel5i7fuiv9l2n9k3zpfcs4hs0pwkmbq1unjsshli1wjbb4rrt38yqik05fned3af1dtby020yj0drzzqkdrqyrla4zene1gdlnvtx48bkn4dksgnyeimxg64zpiwrr34bbcxa8dkglhkpakzc96d8w64myykvevskupygzejtcky89oeyi1gemvt8vhs3wly29c1v9ulqaeyuj22ilpbb8mblp4iuic4nf9qxkcstmpilb417s7ruml41778934dex2701oiupt56v1z2twnedp4cdxak1u48kcuglli37u5jojhvmvq11vpoiarqydll98qn5coq2zx49vh7ky59cbvyrp0wv9emd56r6mbm5v7zb04vns17njs01p4jox5fp26786u7ehcrm7djzcwvk7zby7xbr3vg33vciyd10bzjosma88n8n8ta36no529d6vxdcotneadvjkrr3f51uub14p039ce3xlt8psextgzewuqjni86z75mx7h63zd66oqieigzy3cccsspqi5ll49uj3r224uru4qpdweo5wdkosfn5u67ogb4r0gfgi2nkcrz17agsgc80t2baoke3ousuhe2glbw3kfugtwgw5mtl53fus16e7spp4v29edychuqxbp9izwtuq8obeq0zd4rfmheghoo1wt3v39980ibtgax4tsno2huspw76e1r7z1y6q7zzyjh4m3970e5w5voc5ic9d9dg2twlbb6upw0ntognknxde80shinixzskw8ytc9dip0yizu30z0mz518acooml3mf1akgt22hceh026myud45wgish3pxiuvcgohl9yvy5qxqmyptug9qhtfpjvhhbvyydy3pvw6ga0bhu9vizgx0bqjboi6a51fuqdu17k5u0572hmswwch83ntk0hu98r7uz0m9n38m1y1fefjcvet71aj7ieknwz1u7sd775fuq3c3xx9wd5dwimnw1a4t3qeh5k0vsfzigugi523by0m69kgpns5dw2y8b0y58wlyhuaqdvbxqo92m8bfolk2shyesyi5necphvtyl9b3wmrc22268i67y77hpth9zqh4vo7qqoddgrmo8w4bigdzku16vbln0u5rukwf35wz13esx8ka4ex3uuku2sgmxifjc6m6bw659f1l19ccl2nqwwuqx90i4e29apqwdj8fy4t5s5259dri1pwd2jvxf780f1exvtgfll50pral8cww7c16cdcfmzvjmvkr4ttc8g34yfqfvx0vwg6ad1vdi5hync09t8qbwcffzd7gt0a7xswpyti7ofvx8b795pvdc3mw1edk4fywowuiyhqu95qyeudde9326vdot6whxho0el5dvcgxwf0phmb11rps1c6njv8qvk6qvdib1ppxy3qc4vccc0wklxbny9cz85633vnqmm58mcap7sww7l05rwb5gs1bcgdsm63tobfqs3a2li06nhwh385a8e3jnqt8ibq7qn3brkc01m6njyt51arxube3cbn3azyjcgut8kywpentzfwh1hmlsojv1q0gaxgrqf8y8ik8goro89lj2hmf9i052qcp62czuv8km1zh2nb26v75e14xccy4xirkxnc66g11r2qjfumvrgfsrkf782238j60scje0ww5zkq5ghyxxdplevl50rcqrgzydsezydr4igt81ret40yi031eu73dx0yf2ooqveqi1qvikjn7iyt2szv7t01fskzxfb18lzl23dokt4lzlbqh8d5ko6qvrv330ciu28hwzydp62jepo0k35q5wdyv1p5sbx77vuc410gcp9whytmgfmt1hjauksleucicmvu60a70ofx6dt1pgoq4tzkezur24ge3u8fi1aus9k76bllw30za54420zqw0whlgnxgma3q3l315p8ont2ec65532okuz4o6hprfh6hngx5i0isp2iwv33m8
i159ma4qh5284ncvfejsszieulvllvau7gxmoettvfzq1cdjnkkc8usxb5nd0i94rdb7pkiocb17hyy06r50jlf6xo2bb7mhc76p5nhkqpa2ndzo0agdd2l6j1aqlbxerpn0grwhtrke7024k0awgq55odwahltfc4yic92hie1kul7oiti0zecomsigjd5f783ew7cbzr982ndhix612vbdbpw35uok2tcyrmayfejpbq2i56c8riyd2ppvn23jdaxu3tztklkzzcu5xv4pg0pkyo5yvevn2jojxcalok4t8xm67ozvg17uvmciztufs5do5rtz81hddmgj8exa7xqu6wgh55qy60ulcf5gb5hig8756uvvwfjvjzsidg9thyjqjtm64ihueg0efco5anatgh6qdo2w60a0u2ueb47dng61892rhxldxqz4cexldrnkcksprdvyjg8rda0au5964gtgyf40s7m257di4zjtzgc0px66ir9e95tlnbgmwqofj4p1n90u3wftqubycuee1li0rqbfjynnygmt9xfyjz6br5giyoxrirr27r0xdo20559wykmep9ixp5b849lexw7noyweo4aofzfgm1n4sd7ex2dggfocjteqmiyvxgzevpeittjluzsgh7wlpnxzkbjjdy9595bu3lq7k3sye1inqjlc0xsbhf2txx3006fwwfhepqnlutatakanuodq0i1nrztk02cf9d58v09qgby7duwys7k2klrpvov26g5qqq0keo3py0as319oc5dwhh5tvy6dvyda5uf9rc73i7kl570uqbxco60ye5iffrbqw5313nw8hyv3jdbkf3mb42vbo97bhqlcb2qsa8zm9cy84hbay0mlvcr5maoiw919ypiu5y89cbdl64yaik5xqj96m1rq4hqyn4hz7a4zni5u6qqo70g2x1x29hwpgj1wvy4e5yjvydi7jfi0w0gix6nkb63r3f4ci1loid65rb9uq31u31j6hinl4i386eoi67fqji37zjzzxt7m5rp4ky1mv6szfttlz3r4wnvpdb63tylq6qiv2u791u4hxzw692zk3344ifkw3mmzx9qtciycona9xgpnnh9ksdownracamr4g2voj4n72hsdg2e8unmw4ii28xnl1l9hfkevikxlmjk1ttc6p9gu1eo24z8ogaxc0zcl0d9mdi21r05toflr0inbblp56rm7hrfx2h0o1sqg4kgls6xno50rhm2a12n8n9j0nwrogpiecm737tbin4rjv5tcg1uejhijgbh6ysgkr63sfjr0s3m8yj2pheav91imkyc6o80uevgaw0t2wej5zo3m1m9vn3x5hfbno21q75koowuxa2o01dlwhf86cf5u0s54aggxcrowct3e7w2avnfb5o2klzqqozftp3ajiipzbfqzi5he66zb99seeo6oilkstk6od1hm9zswvsap9me9evlgczp3bcisxcaijfql0l99zb7tz9negi1cw1ne59icnyd3a8w1a3j8pmif6qu8h4o64n2kmj84fvj37pgvuz3v3rmm2i1i9il294w8p5mfnkz96xhamvnr9tc7r4omwqjy60fbgyrs31dzswtx5wdqwf99b80a5nd1xrkctakdrmpn8364jheur6gs62qg14j1i9xf04t3cbtrendj0x84x3ljeal3m1vtftrr8165zifg46zj685bawsb967f8bf8njuorf8hmve1ny42t8kfskevxrsk06iw2z507lwakgnw6squq30cmfijrjaadloeknof390pg67qf1d7twm1dcerwfp25ld5dfb8b0jz01r3tiwyuw61ukj4ji2dbioyld4tzrbuhcnq0j0x8qz8n2x43gous08nsv1lvr2t1q6o03ybjo7osz93kd1h8ool1adzhdp1v6me9jdg6e3dabbp15e == 
\2\o\o\3\r\k\2\h\1\a\7\w\u\y\v\n\e\a\f\a\j\2\s\w\8\n\i\r\e\2\k\w\r\r\q\l\3\f\u\5\8\l\e\m\3\i\p\0\h\c\a\q\6\x\q\j\z\y\f\c\o\v\8\t\u\c\2\h\j\r\8\p\y\u\1\3\p\o\5\d\d\m\p\j\f\c\z\g\b\f\7\g\y\9\a\2\n\r\q\7\2\y\b\5\9\p\x\u\8\j\o\d\x\n\6\9\3\k\8\h\u\9\s\c\b\j\8\j\1\4\3\d\v\d\q\b\5\h\u\s\2\p\2\g\h\n\y\s\j\k\e\l\5\i\7\f\u\i\v\9\l\2\n\9\k\3\z\p\f\c\s\4\h\s\0\p\w\k\m\b\q\1\u\n\j\s\s\h\l\i\1\w\j\b\b\4\r\r\t\3\8\y\q\i\k\0\5\f\n\e\d\3\a\f\1\d\t\b\y\0\2\0\y\j\0\d\r\z\z\q\k\d\r\q\y\r\l\a\4\z\e\n\e\1\g\d\l\n\v\t\x\4\8\b\k\n\4\d\k\s\g\n\y\e\i\m\x\g\6\4\z\p\i\w\r\r\3\4\b\b\c\x\a\8\d\k\g\l\h\k\p\a\k\z\c\9\6\d\8\w\6\4\m\y\y\k\v\e\v\s\k\u\p\y\g\z\e\j\t\c\k\y\8\9\o\e\y\i\1\g\e\m\v\t\8\v\h\s\3\w\l\y\2\9\c\1\v\9\u\l\q\a\e\y\u\j\2\2\i\l\p\b\b\8\m\b\l\p\4\i\u\i\c\4\n\f\9\q\x\k\c\s\t\m\p\i\l\b\4\1\7\s\7\r\u\m\l\4\1\7\7\8\9\3\4\d\e\x\2\7\0\1\o\i\u\p\t\5\6\v\1\z\2\t\w\n\e\d\p\4\c\d\x\a\k\1\u\4\8\k\c\u\g\l\l\i\3\7\u\5\j\o\j\h\v\m\v\q\1\1\v\p\o\i\a\r\q\y\d\l\l\9\8\q\n\5\c\o\q\2\z\x\4\9\v\h\7\k\y\5\9\c\b\v\y\r\p\0\w\v\9\e\m\d\5\6\r\6\m\b\m\5\v\7\z\b\0\4\v\n\s\1\7\n\j\s\0\1\p\4\j\o\x\5\f\p\2\6\7\8\6\u\7\e\h\c\r\m\7\d\j\z\c\w\v\k\7\z\b\y\7\x\b\r\3\v\g\3\3\v\c\i\y\d\1\0\b\z\j\o\s\m\a\8\8\n\8\n\8\t\a\3\6\n\o\5\2\9\d\6\v\x\d\c\o\t\n\e\a\d\v\j\k\r\r\3\f\5\1\u\u\b\1\4\p\0\3\9\c\e\3\x\l\t\8\p\s\e\x\t\g\z\e\w\u\q\j\n\i\8\6\z\7\5\m\x\7\h\6\3\z\d\6\6\o\q\i\e\i\g\z\y\3\c\c\c\s\s\p\q\i\5\l\l\4\9\u\j\3\r\2\2\4\u\r\u\4\q\p\d\w\e\o\5\w\d\k\o\s\f\n\5\u\6\7\o\g\b\4\r\0\g\f\g\i\2\n\k\c\r\z\1\7\a\g\s\g\c\8\0\t\2\b\a\o\k\e\3\o\u\s\u\h\e\2\g\l\b\w\3\k\f\u\g\t\w\g\w\5\m\t\l\5\3\f\u\s\1\6\e\7\s\p\p\4\v\2\9\e\d\y\c\h\u\q\x\b\p\9\i\z\w\t\u\q\8\o\b\e\q\0\z\d\4\r\f\m\h\e\g\h\o\o\1\w\t\3\v\3\9\9\8\0\i\b\t\g\a\x\4\t\s\n\o\2\h\u\s\p\w\7\6\e\1\r\7\z\1\y\6\q\7\z\z\y\j\h\4\m\3\9\7\0\e\5\w\5\v\o\c\5\i\c\9\d\9\d\g\2\t\w\l\b\b\6\u\p\w\0\n\t\o\g\n\k\n\x\d\e\8\0\s\h\i\n\i\x\z\s\k\w\8\y\t\c\9\d\i\p\0\y\i\z\u\3\0\z\0\m\z\5\1\8\a\c\o\o\m\l\3\m\f\1\a\k\g\t\2\2\h\c\e\h\0\2\6\m\y\u\d\4\5\w\g\i\s\h\3\p\x\i\u\v\c\g\o\h\l\9\y\v\y\5\q\x\q\m\y\p\t\u\g\9\q\h\t\f\p\j\v\h\h\b\v\y\y\d\y\3\p\v\w\6\g\a\0\b\h\u\9\v\i\z\g\x\0\b\q\j\b\o\i\6\a\5\1\f\u\q\d\u\1\7\k\5\u\0\5\7\2\h\m\s\w\w\c\h\8\3\n\t\k\0\h\u\9\8\r\7\u\z\0\m\9\n\3\8\m\1\y\1\f\e\f\j\c\v\e\t\7\1\a\j\7\i\e\k\n\w\z\1\u\7\s\d\7\7\5\f\u\q\3\c\3\x\x\9\w\d\5\d\w\i\m\n\w\1\a\4\t\3\q\e\h\5\k\0\v\s\f\z\i\g\u\g\i\5\2\3\b\y\0\m\6\9\k\g\p\n\s\5\d\w\2\y\8\b\0\y\5\8\w\l\y\h\u\a\q\d\v\b\x\q\o\9\2\m\8\b\f\o\l\k\2\s\h\y\e\s\y\i\5\n\e\c\p\h\v\t\y\l\9\b\3\w\m\r\c\2\2\2\6\8\i\6\7\y\7\7\h\p\t\h\9\z\q\h\4\v\o\7\q\q\o\d\d\g\r\m\o\8\w\4\b\i\g\d\z\k\u\1\6\v\b\l\n\0\u\5\r\u\k\w\f\3\5\w\z\1\3\e\s\x\8\k\a\4\e\x\3\u\u\k\u\2\s\g\m\x\i\f\j\c\6\m\6\b\w\6\5\9\f\1\l\1\9\c\c\l\2\n\q\w\w\u\q\x\9\0\i\4\e\2\9\a\p\q\w\d\j\8\f\y\4\t\5\s\5\2\5\9\d\r\i\1\p\w\d\2\j\v\x\f\7\8\0\f\1\e\x\v\t\g\f\l\l\5\0\p\r\a\l\8\c\w\w\7\c\1\6\c\d\c\f\m\z\v\j\m\v\k\r\4\t\t\c\8\g\3\4\y\f\q\f\v\x\0\v\w\g\6\a\d\1\v\d\i\5\h\y\n\c\0\9\t\8\q\b\w\c\f\f\z\d\7\g\t\0\a\7\x\s\w\p\y\t\i\7\o\f\v\x\8\b\7\9\5\p\v\d\c\3\m\w\1\e\d\k\4\f\y\w\o\w\u\i\y\h\q\u\9\5\q\y\e\u\d\d\e\9\3\2\6\v\d\o\t\6\w\h\x\h\o\0\e\l\5\d\v\c\g\x\w\f\0\p\h\m\b\1\1\r\p\s\1\c\6\n\j\v\8\q\v\k\6\q\v\d\i\b\1\p\p\x\y\3\q\c\4\v\c\c\c\0\w\k\l\x\b\n\y\9\c\z\8\5\6\3\3\v\n\q\m\m\5\8\m\c\a\p\7\s\w\w\7\l\0\5\r\w\b\5\g\s\1\b\c\g\d\s\m\6\3\t\o\b\f\q\s\3\a\2\l\i\0\6\n\h\w\h\3\8\5\a\8\e\3\j\n\q\t\8\i\b\q\7\q\n\3\b\r\k\c\0\1\m\6\n\j\y\t\5\1\a\r\x\u\b\e\3\c\b\n\3\a\z\y\j\c\g\u\t\8\k\y\w\p\e\n\t\z\f\w\h\1\h\m\l\s\o\j\v\1\q\0\g\a\x\g\r\q\f\8\y\8\i\k\8\g\o\r\o\8\9\l\j\2\h\m\f\9\i\0\5\2\q\c\p\6\2\c\z\u\v\8\k\m\1\z\h\2\n\b\2\6\v\7\5\e\1\4\x\c\c\y\4\x\
i\r\k\x\n\c\6\6\g\1\1\r\2\q\j\f\u\m\v\r\g\f\s\r\k\f\7\8\2\2\3\8\j\6\0\s\c\j\e\0\w\w\5\z\k\q\5\g\h\y\x\x\d\p\l\e\v\l\5\0\r\c\q\r\g\z\y\d\s\e\z\y\d\r\4\i\g\t\8\1\r\e\t\4\0\y\i\0\3\1\e\u\7\3\d\x\0\y\f\2\o\o\q\v\e\q\i\1\q\v\i\k\j\n\7\i\y\t\2\s\z\v\7\t\0\1\f\s\k\z\x\f\b\1\8\l\z\l\2\3\d\o\k\t\4\l\z\l\b\q\h\8\d\5\k\o\6\q\v\r\v\3\3\0\c\i\u\2\8\h\w\z\y\d\p\6\2\j\e\p\o\0\k\3\5\q\5\w\d\y\v\1\p\5\s\b\x\7\7\v\u\c\4\1\0\g\c\p\9\w\h\y\t\m\g\f\m\t\1\h\j\a\u\k\s\l\e\u\c\i\c\m\v\u\6\0\a\7\0\o\f\x\6\d\t\1\p\g\o\q\4\t\z\k\e\z\u\r\2\4\g\e\3\u\8\f\i\1\a\u\s\9\k\7\6\b\l\l\w\3\0\z\a\5\4\4\2\0\z\q\w\0\w\h\l\g\n\x\g\m\a\3\q\3\l\3\1\5\p\8\o\n\t\2\e\c\6\5\5\3\2\o\k\u\z\4\o\6\h\p\r\f\h\6\h\n\g\x\5\i\0\i\s\p\2\i\w\v\3\3\m\8\i\1\5\9\m\a\4\q\h\5\2\8\4\n\c\v\f\e\j\s\s\z\i\e\u\l\v\l\l\v\a\u\7\g\x\m\o\e\t\t\v\f\z\q\1\c\d\j\n\k\k\c\8\u\s\x\b\5\n\d\0\i\9\4\r\d\b\7\p\k\i\o\c\b\1\7\h\y\y\0\6\r\5\0\j\l\f\6\x\o\2\b\b\7\m\h\c\7\6\p\5\n\h\k\q\p\a\2\n\d\z\o\0\a\g\d\d\2\l\6\j\1\a\q\l\b\x\e\r\p\n\0\g\r\w\h\t\r\k\e\7\0\2\4\k\0\a\w\g\q\5\5\o\d\w\a\h\l\t\f\c\4\y\i\c\9\2\h\i\e\1\k\u\l\7\o\i\t\i\0\z\e\c\o\m\s\i\g\j\d\5\f\7\8\3\e\w\7\c\b\z\r\9\8\2\n\d\h\i\x\6\1\2\v\b\d\b\p\w\3\5\u\o\k\2\t\c\y\r\m\a\y\f\e\j\p\b\q\2\i\5\6\c\8\r\i\y\d\2\p\p\v\n\2\3\j\d\a\x\u\3\t\z\t\k\l\k\z\z\c\u\5\x\v\4\p\g\0\p\k\y\o\5\y\v\e\v\n\2\j\o\j\x\c\a\l\o\k\4\t\8\x\m\6\7\o\z\v\g\1\7\u\v\m\c\i\z\t\u\f\s\5\d\o\5\r\t\z\8\1\h\d\d\m\g\j\8\e\x\a\7\x\q\u\6\w\g\h\5\5\q\y\6\0\u\l\c\f\5\g\b\5\h\i\g\8\7\5\6\u\v\v\w\f\j\v\j\z\s\i\d\g\9\t\h\y\j\q\j\t\m\6\4\i\h\u\e\g\0\e\f\c\o\5\a\n\a\t\g\h\6\q\d\o\2\w\6\0\a\0\u\2\u\e\b\4\7\d\n\g\6\1\8\9\2\r\h\x\l\d\x\q\z\4\c\e\x\l\d\r\n\k\c\k\s\p\r\d\v\y\j\g\8\r\d\a\0\a\u\5\9\6\4\g\t\g\y\f\4\0\s\7\m\2\5\7\d\i\4\z\j\t\z\g\c\0\p\x\6\6\i\r\9\e\9\5\t\l\n\b\g\m\w\q\o\f\j\4\p\1\n\9\0\u\3\w\f\t\q\u\b\y\c\u\e\e\1\l\i\0\r\q\b\f\j\y\n\n\y\g\m\t\9\x\f\y\j\z\6\b\r\5\g\i\y\o\x\r\i\r\r\2\7\r\0\x\d\o\2\0\5\5\9\w\y\k\m\e\p\9\i\x\p\5\b\8\4\9\l\e\x\w\7\n\o\y\w\e\o\4\a\o\f\z\f\g\m\1\n\4\s\d\7\e\x\2\d\g\g\f\o\c\j\t\e\q\m\i\y\v\x\g\z\e\v\p\e\i\t\t\j\l\u\z\s\g\h\7\w\l\p\n\x\z\k\b\j\j\d\y\9\5\9\5\b\u\3\l\q\7\k\3\s\y\e\1\i\n\q\j\l\c\0\x\s\b\h\f\2\t\x\x\3\0\0\6\f\w\w\f\h\e\p\q\n\l\u\t\a\t\a\k\a\n\u\o\d\q\0\i\1\n\r\z\t\k\0\2\c\f\9\d\5\8\v\0\9\q\g\b\y\7\d\u\w\y\s\7\k\2\k\l\r\p\v\o\v\2\6\g\5\q\q\q\0\k\e\o\3\p\y\0\a\s\3\1\9\o\c\5\d\w\h\h\5\t\v\y\6\d\v\y\d\a\5\u\f\9\r\c\7\3\i\7\k\l\5\7\0\u\q\b\x\c\o\6\0\y\e\5\i\f\f\r\b\q\w\5\3\1\3\n\w\8\h\y\v\3\j\d\b\k\f\3\m\b\4\2\v\b\o\9\7\b\h\q\l\c\b\2\q\s\a\8\z\m\9\c\y\8\4\h\b\a\y\0\m\l\v\c\r\5\m\a\o\i\w\9\1\9\y\p\i\u\5\y\8\9\c\b\d\l\6\4\y\a\i\k\5\x\q\j\9\6\m\1\r\q\4\h\q\y\n\4\h\z\7\a\4\z\n\i\5\u\6\q\q\o\7\0\g\2\x\1\x\2\9\h\w\p\g\j\1\w\v\y\4\e\5\y\j\v\y\d\i\7\j\f\i\0\w\0\g\i\x\6\n\k\b\6\3\r\3\f\4\c\i\1\l\o\i\d\6\5\r\b\9\u\q\3\1\u\3\1\j\6\h\i\n\l\4\i\3\8\6\e\o\i\6\7\f\q\j\i\3\7\z\j\z\z\x\t\7\m\5\r\p\4\k\y\1\m\v\6\s\z\f\t\t\l\z\3\r\4\w\n\v\p\d\b\6\3\t\y\l\q\6\q\i\v\2\u\7\9\1\u\4\h\x\z\w\6\9\2\z\k\3\3\4\4\i\f\k\w\3\m\m\z\x\9\q\t\c\i\y\c\o\n\a\9\x\g\p\n\n\h\9\k\s\d\o\w\n\r\a\c\a\m\r\4\g\2\v\o\j\4\n\7\2\h\s\d\g\2\e\8\u\n\m\w\4\i\i\2\8\x\n\l\1\l\9\h\f\k\e\v\i\k\x\l\m\j\k\1\t\t\c\6\p\9\g\u\1\e\o\2\4\z\8\o\g\a\x\c\0\z\c\l\0\d\9\m\d\i\2\1\r\0\5\t\o\f\l\r\0\i\n\b\b\l\p\5\6\r\m\7\h\r\f\x\2\h\0\o\1\s\q\g\4\k\g\l\s\6\x\n\o\5\0\r\h\m\2\a\1\2\n\8\n\9\j\0\n\w\r\o\g\p\i\e\c\m\7\3\7\t\b\i\n\4\r\j\v\5\t\c\g\1\u\e\j\h\i\j\g\b\h\6\y\s\g\k\r\6\3\s\f\j\r\0\s\3\m\8\y\j\2\p\h\e\a\v\9\1\i\m\k\y\c\6\o\8\0\u\e\v\g\a\w\0\t\2\w\e\j\5\z\o\3\m\1\m\9\v\n\3\x\5\h\f\b\n\o\2\1\q\7\5\k\o\o\w\u\x\a\2\o\0\1\d\l\w\h\f\8\6\c\f\5\u\0\s\5\4\a\g\g\x\c\r\o\w\c\t\3\e\7\w\2\a\v\n\f\b\5\o
\2\k\l\z\q\q\o\z\f\t\p\3\a\j\i\i\p\z\b\f\q\z\i\5\h\e\6\6\z\b\9\9\s\e\e\o\6\o\i\l\k\s\t\k\6\o\d\1\h\m\9\z\s\w\v\s\a\p\9\m\e\9\e\v\l\g\c\z\p\3\b\c\i\s\x\c\a\i\j\f\q\l\0\l\9\9\z\b\7\t\z\9\n\e\g\i\1\c\w\1\n\e\5\9\i\c\n\y\d\3\a\8\w\1\a\3\j\8\p\m\i\f\6\q\u\8\h\4\o\6\4\n\2\k\m\j\8\4\f\v\j\3\7\p\g\v\u\z\3\v\3\r\m\m\2\i\1\i\9\i\l\2\9\4\w\8\p\5\m\f\n\k\z\9\6\x\h\a\m\v\n\r\9\t\c\7\r\4\o\m\w\q\j\y\6\0\f\b\g\y\r\s\3\1\d\z\s\w\t\x\5\w\d\q\w\f\9\9\b\8\0\a\5\n\d\1\x\r\k\c\t\a\k\d\r\m\p\n\8\3\6\4\j\h\e\u\r\6\g\s\6\2\q\g\1\4\j\1\i\9\x\f\0\4\t\3\c\b\t\r\e\n\d\j\0\x\8\4\x\3\l\j\e\a\l\3\m\1\v\t\f\t\r\r\8\1\6\5\z\i\f\g\4\6\z\j\6\8\5\b\a\w\s\b\9\6\7\f\8\b\f\8\n\j\u\o\r\f\8\h\m\v\e\1\n\y\4\2\t\8\k\f\s\k\e\v\x\r\s\k\0\6\i\w\2\z\5\0\7\l\w\a\k\g\n\w\6\s\q\u\q\3\0\c\m\f\i\j\r\j\a\a\d\l\o\e\k\n\o\f\3\9\0\p\g\6\7\q\f\1\d\7\t\w\m\1\d\c\e\r\w\f\p\2\5\l\d\5\d\f\b\8\b\0\j\z\0\1\r\3\t\i\w\y\u\w\6\1\u\k\j\4\j\i\2\d\b\i\o\y\l\d\4\t\z\r\b\u\h\c\n\q\0\j\0\x\8\q\z\8\n\2\x\4\3\g\o\u\s\0\8\n\s\v\1\l\v\r\2\t\1\q\6\o\0\3\y\b\j\o\7\o\s\z\9\3\k\d\1\h\8\o\o\l\1\a\d\z\h\d\p\1\v\6\m\e\9\j\d\g\6\e\3\d\a\b\b\p\1\5\e ]] 00:08:27.639 00:08:27.639 real 0m1.363s 00:08:27.639 user 0m0.929s 00:08:27.639 sys 0m0.664s 00:08:27.639 03:58:09 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.639 ************************************ 00:08:27.639 END TEST dd_rw_offset 00:08:27.639 ************************************ 00:08:27.639 03:58:09 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:27.639 03:58:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:08:27.639 03:58:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:27.639 03:58:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:27.639 03:58:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:27.639 03:58:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:08:27.639 03:58:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:27.639 03:58:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:08:27.639 03:58:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:27.639 03:58:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:08:27.639 03:58:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:27.639 03:58:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:27.639 { 00:08:27.639 "subsystems": [ 00:08:27.639 { 00:08:27.639 "subsystem": "bdev", 00:08:27.639 "config": [ 00:08:27.639 { 00:08:27.639 "params": { 00:08:27.639 "trtype": "pcie", 00:08:27.639 "traddr": "0000:00:10.0", 00:08:27.639 "name": "Nvme0" 00:08:27.639 }, 00:08:27.639 "method": "bdev_nvme_attach_controller" 00:08:27.639 }, 00:08:27.639 { 00:08:27.639 "method": "bdev_wait_for_examine" 00:08:27.639 } 00:08:27.639 ] 00:08:27.639 } 00:08:27.639 ] 00:08:27.639 } 00:08:27.639 [2024-12-09 03:58:09.441922] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:27.639 [2024-12-09 03:58:09.442022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60331 ] 00:08:27.898 [2024-12-09 03:58:09.593144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.898 [2024-12-09 03:58:09.659106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.898 [2024-12-09 03:58:09.720903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.898  [2024-12-09T03:58:10.106Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:28.156 00:08:28.156 03:58:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:28.156 ************************************ 00:08:28.156 END TEST spdk_dd_basic_rw 00:08:28.156 ************************************ 00:08:28.156 00:08:28.156 real 0m18.208s 00:08:28.156 user 0m12.887s 00:08:28.156 sys 0m7.221s 00:08:28.156 03:58:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.156 03:58:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:28.156 03:58:10 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:28.156 03:58:10 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.156 03:58:10 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.156 03:58:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:28.156 ************************************ 00:08:28.156 START TEST spdk_dd_posix 00:08:28.156 ************************************ 00:08:28.156 03:58:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:28.415 * Looking for test storage... 
00:08:28.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:28.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.415 --rc genhtml_branch_coverage=1 00:08:28.415 --rc genhtml_function_coverage=1 00:08:28.415 --rc genhtml_legend=1 00:08:28.415 --rc geninfo_all_blocks=1 00:08:28.415 --rc geninfo_unexecuted_blocks=1 00:08:28.415 00:08:28.415 ' 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:28.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.415 --rc genhtml_branch_coverage=1 00:08:28.415 --rc genhtml_function_coverage=1 00:08:28.415 --rc genhtml_legend=1 00:08:28.415 --rc geninfo_all_blocks=1 00:08:28.415 --rc geninfo_unexecuted_blocks=1 00:08:28.415 00:08:28.415 ' 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:28.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.415 --rc genhtml_branch_coverage=1 00:08:28.415 --rc genhtml_function_coverage=1 00:08:28.415 --rc genhtml_legend=1 00:08:28.415 --rc geninfo_all_blocks=1 00:08:28.415 --rc geninfo_unexecuted_blocks=1 00:08:28.415 00:08:28.415 ' 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:28.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.415 --rc genhtml_branch_coverage=1 00:08:28.415 --rc genhtml_function_coverage=1 00:08:28.415 --rc genhtml_legend=1 00:08:28.415 --rc geninfo_all_blocks=1 00:08:28.415 --rc geninfo_unexecuted_blocks=1 00:08:28.415 00:08:28.415 ' 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.415 03:58:10 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:28.416 * First test run, liburing in use 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:28.416 ************************************ 00:08:28.416 START TEST dd_flag_append 00:08:28.416 ************************************ 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=ku23dunp30eyvprbax1uspu095twhdyy 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=s7hiwlk2x284g600qhj9vs47twqmnk3b 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s ku23dunp30eyvprbax1uspu095twhdyy 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s s7hiwlk2x284g600qhj9vs47twqmnk3b 00:08:28.416 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:28.674 [2024-12-09 03:58:10.394270] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:28.674 [2024-12-09 03:58:10.394417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60400 ] 00:08:28.674 [2024-12-09 03:58:10.542836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.674 [2024-12-09 03:58:10.602725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.933 [2024-12-09 03:58:10.662337] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.933  [2024-12-09T03:58:11.141Z] Copying: 32/32 [B] (average 31 kBps) 00:08:29.191 00:08:29.191 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ s7hiwlk2x284g600qhj9vs47twqmnk3bku23dunp30eyvprbax1uspu095twhdyy == \s\7\h\i\w\l\k\2\x\2\8\4\g\6\0\0\q\h\j\9\v\s\4\7\t\w\q\m\n\k\3\b\k\u\2\3\d\u\n\p\3\0\e\y\v\p\r\b\a\x\1\u\s\p\u\0\9\5\t\w\h\d\y\y ]] 00:08:29.191 00:08:29.191 real 0m0.584s 00:08:29.191 user 0m0.320s 00:08:29.191 sys 0m0.297s 00:08:29.191 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.191 ************************************ 00:08:29.191 END TEST dd_flag_append 00:08:29.191 ************************************ 00:08:29.192 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:29.192 03:58:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:29.192 03:58:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.192 03:58:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.192 03:58:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:29.192 ************************************ 00:08:29.192 START TEST dd_flag_directory 00:08:29.192 ************************************ 00:08:29.192 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:08:29.192 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:29.192 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:29.192 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:29.192 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.192 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.192 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.192 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.192 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.192 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.192 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.192 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:29.192 03:58:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:29.192 [2024-12-09 03:58:11.034534] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:29.192 [2024-12-09 03:58:11.034662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60426 ] 00:08:29.450 [2024-12-09 03:58:11.183518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.450 [2024-12-09 03:58:11.251938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.450 [2024-12-09 03:58:11.312928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.450 [2024-12-09 03:58:11.356185] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:29.450 [2024-12-09 03:58:11.356286] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:29.450 [2024-12-09 03:58:11.356302] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:29.708 [2024-12-09 03:58:11.486174] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:29.708 03:58:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:29.708 03:58:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:29.708 03:58:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:29.708 03:58:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:29.708 03:58:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:29.708 03:58:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:29.708 03:58:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:29.708 03:58:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:29.708 03:58:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:29.708 03:58:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.708 03:58:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.708 03:58:11 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.708 03:58:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.708 03:58:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.708 03:58:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.708 03:58:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.708 03:58:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:29.709 03:58:11 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:29.709 [2024-12-09 03:58:11.620185] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:29.709 [2024-12-09 03:58:11.620289] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60441 ] 00:08:29.967 [2024-12-09 03:58:11.768860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.967 [2024-12-09 03:58:11.833606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.967 [2024-12-09 03:58:11.891542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.252 [2024-12-09 03:58:11.932010] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:30.252 [2024-12-09 03:58:11.932069] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:30.252 [2024-12-09 03:58:11.932101] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:30.252 [2024-12-09 03:58:12.059673] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:30.252 00:08:30.252 real 0m1.161s 00:08:30.252 user 0m0.634s 00:08:30.252 sys 0m0.315s 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.252 ************************************ 00:08:30.252 END TEST dd_flag_directory 00:08:30.252 ************************************ 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:08:30.252 03:58:12 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:30.252 ************************************ 00:08:30.252 START TEST dd_flag_nofollow 00:08:30.252 ************************************ 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.252 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.509 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.509 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.509 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:30.509 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:30.509 [2024-12-09 03:58:12.261474] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:30.509 [2024-12-09 03:58:12.261600] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60470 ] 00:08:30.509 [2024-12-09 03:58:12.415567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.767 [2024-12-09 03:58:12.488596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.767 [2024-12-09 03:58:12.549844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.767 [2024-12-09 03:58:12.597318] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:30.767 [2024-12-09 03:58:12.597418] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:30.767 [2024-12-09 03:58:12.597448] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.024 [2024-12-09 03:58:12.736606] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:31.024 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:31.024 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.024 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:31.024 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:31.024 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:31.024 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.024 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:31.024 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:31.024 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:31.024 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.024 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.024 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.024 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.024 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.024 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.024 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.024 03:58:12 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.024 03:58:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:31.024 [2024-12-09 03:58:12.872581] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:31.024 [2024-12-09 03:58:12.872696] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60479 ] 00:08:31.281 [2024-12-09 03:58:13.022033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.281 [2024-12-09 03:58:13.089674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.281 [2024-12-09 03:58:13.147029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.281 [2024-12-09 03:58:13.189760] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:31.281 [2024-12-09 03:58:13.190147] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:31.281 [2024-12-09 03:58:13.190196] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.539 [2024-12-09 03:58:13.317527] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:31.539 03:58:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:31.539 03:58:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.539 03:58:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:31.539 03:58:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:31.539 03:58:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:31.539 03:58:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.539 03:58:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:31.539 03:58:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:31.539 03:58:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:31.539 03:58:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:31.539 [2024-12-09 03:58:13.452526] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:31.539 [2024-12-09 03:58:13.452828] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60487 ] 00:08:31.797 [2024-12-09 03:58:13.606010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.797 [2024-12-09 03:58:13.678499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.797 [2024-12-09 03:58:13.739978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.055  [2024-12-09T03:58:14.005Z] Copying: 512/512 [B] (average 500 kBps) 00:08:32.055 00:08:32.055 ************************************ 00:08:32.055 END TEST dd_flag_nofollow 00:08:32.055 ************************************ 00:08:32.055 03:58:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ xlypnyn5c0k5qkyefo676nxdlobwi538d4eq1uuuo0it9s7g16plv97rql7m7m82hskr7ku8p2qj5ovag8s7uokqpel4a20ok2z6hozygu2fk9qo200uyxz84p2c5av2zy6fjrci4ogmfvubiou7chsm96qa9zdbvr1un7aufb7ddf9k4gl13xrb19p3zdz9ye9rthsbmu5iur36imrghm37nawmmex4dtce5m09hjja5y059cggabi6gwkop3mmgn12yefaukbwbefyt99u8orxrp5iwsc81ooa5dzpwt6jurqubolkk94o9jmo5o7g9d2gi6kcffo4g2zs7pmey3pnrk8b94jaytg67jzkth365anx2gw7s8xiu3tt1xauxrtvhbj5f7by0drb96p6ahcdxuxat5vi80t2viqyf6obz57c9txlnqvx0okm6iesaxdh2kgqk3kdolobifjqou6t2n68m7ztvvh5g1evxr30ixhyys5v2pxmqqqp4123 == \x\l\y\p\n\y\n\5\c\0\k\5\q\k\y\e\f\o\6\7\6\n\x\d\l\o\b\w\i\5\3\8\d\4\e\q\1\u\u\u\o\0\i\t\9\s\7\g\1\6\p\l\v\9\7\r\q\l\7\m\7\m\8\2\h\s\k\r\7\k\u\8\p\2\q\j\5\o\v\a\g\8\s\7\u\o\k\q\p\e\l\4\a\2\0\o\k\2\z\6\h\o\z\y\g\u\2\f\k\9\q\o\2\0\0\u\y\x\z\8\4\p\2\c\5\a\v\2\z\y\6\f\j\r\c\i\4\o\g\m\f\v\u\b\i\o\u\7\c\h\s\m\9\6\q\a\9\z\d\b\v\r\1\u\n\7\a\u\f\b\7\d\d\f\9\k\4\g\l\1\3\x\r\b\1\9\p\3\z\d\z\9\y\e\9\r\t\h\s\b\m\u\5\i\u\r\3\6\i\m\r\g\h\m\3\7\n\a\w\m\m\e\x\4\d\t\c\e\5\m\0\9\h\j\j\a\5\y\0\5\9\c\g\g\a\b\i\6\g\w\k\o\p\3\m\m\g\n\1\2\y\e\f\a\u\k\b\w\b\e\f\y\t\9\9\u\8\o\r\x\r\p\5\i\w\s\c\8\1\o\o\a\5\d\z\p\w\t\6\j\u\r\q\u\b\o\l\k\k\9\4\o\9\j\m\o\5\o\7\g\9\d\2\g\i\6\k\c\f\f\o\4\g\2\z\s\7\p\m\e\y\3\p\n\r\k\8\b\9\4\j\a\y\t\g\6\7\j\z\k\t\h\3\6\5\a\n\x\2\g\w\7\s\8\x\i\u\3\t\t\1\x\a\u\x\r\t\v\h\b\j\5\f\7\b\y\0\d\r\b\9\6\p\6\a\h\c\d\x\u\x\a\t\5\v\i\8\0\t\2\v\i\q\y\f\6\o\b\z\5\7\c\9\t\x\l\n\q\v\x\0\o\k\m\6\i\e\s\a\x\d\h\2\k\g\q\k\3\k\d\o\l\o\b\i\f\j\q\o\u\6\t\2\n\6\8\m\7\z\t\v\v\h\5\g\1\e\v\x\r\3\0\i\x\h\y\y\s\5\v\2\p\x\m\q\q\q\p\4\1\2\3 ]] 00:08:32.055 00:08:32.055 real 0m1.796s 00:08:32.055 user 0m1.015s 00:08:32.055 sys 0m0.602s 00:08:32.055 03:58:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.055 03:58:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:32.313 03:58:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:32.313 03:58:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.313 03:58:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.313 03:58:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:32.313 ************************************ 00:08:32.313 START TEST dd_flag_noatime 00:08:32.313 ************************************ 00:08:32.313 03:58:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:08:32.313 03:58:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:08:32.313 03:58:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:32.313 03:58:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:32.313 03:58:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:32.313 03:58:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:32.313 03:58:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:32.313 03:58:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733716693 00:08:32.313 03:58:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:32.313 03:58:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733716693 00:08:32.313 03:58:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:33.250 03:58:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:33.250 [2024-12-09 03:58:15.134220] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:33.250 [2024-12-09 03:58:15.134576] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60529 ] 00:08:33.508 [2024-12-09 03:58:15.288631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.508 [2024-12-09 03:58:15.356041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.508 [2024-12-09 03:58:15.416435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.766  [2024-12-09T03:58:15.716Z] Copying: 512/512 [B] (average 500 kBps) 00:08:33.766 00:08:33.766 03:58:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:33.766 03:58:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733716693 )) 00:08:33.766 03:58:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:33.766 03:58:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733716693 )) 00:08:33.766 03:58:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:34.025 [2024-12-09 03:58:15.729542] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:34.025 [2024-12-09 03:58:15.729645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60548 ] 00:08:34.025 [2024-12-09 03:58:15.876272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.025 [2024-12-09 03:58:15.944872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.284 [2024-12-09 03:58:15.998621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.284  [2024-12-09T03:58:16.234Z] Copying: 512/512 [B] (average 500 kBps) 00:08:34.284 00:08:34.542 03:58:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:34.542 03:58:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733716696 )) 00:08:34.542 00:08:34.542 real 0m2.191s 00:08:34.542 user 0m0.640s 00:08:34.542 sys 0m0.617s 00:08:34.542 03:58:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.542 03:58:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:34.542 ************************************ 00:08:34.542 END TEST dd_flag_noatime 00:08:34.542 ************************************ 00:08:34.542 03:58:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:34.542 03:58:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.542 03:58:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.542 03:58:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:34.542 ************************************ 00:08:34.542 START TEST dd_flags_misc 00:08:34.542 ************************************ 00:08:34.542 03:58:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:08:34.542 03:58:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:34.542 03:58:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:34.542 03:58:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:34.542 03:58:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:34.542 03:58:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:34.542 03:58:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:34.542 03:58:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:34.542 03:58:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:34.542 03:58:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:34.542 [2024-12-09 03:58:16.372049] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:34.542 [2024-12-09 03:58:16.372493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60577 ] 00:08:34.801 [2024-12-09 03:58:16.524589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.801 [2024-12-09 03:58:16.595588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.801 [2024-12-09 03:58:16.654028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.801  [2024-12-09T03:58:17.009Z] Copying: 512/512 [B] (average 500 kBps) 00:08:35.059 00:08:35.060 03:58:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ q9r1ygjxr472uxxmg63eg9the8ic6blsvx768619rl8ixfkgdxyjos2zt9ntc25i3fwcf1wj7oel14u1cez5wsvi5k1e5ia4d9akbbgvj58rsp19ea6b1yw8f2qlmc1ess2eant74iv5ap9zanf6rcfrycair5l731xj6uzfvqoqiwlo4cjot73j2f3svx54mqbxhprwpfgs218573ouznmbksg5exicorradyu0qgvld1sgnc5g5hj2fmnub684kpra2gmni179agpw0ra7n2tglco3y67hhyrcqk1dvcz1jvziewirmpvlrx26uhraap2mlhh8kbdxtjdo33kgwe6apn9cfczj2ezf59lyd7wv7qm2l02rnu9b3ixzhwgl67ot6ek8y98jask0r8znvy67qdqh9it6je0hddeo6n4lxuurp6gxqc8l36ft4pc5vsmjqlf8sbm0p6smzz5okfpeehe3ayojf48r3gdrp5jpo46zayhzfkas0gympf9h == \q\9\r\1\y\g\j\x\r\4\7\2\u\x\x\m\g\6\3\e\g\9\t\h\e\8\i\c\6\b\l\s\v\x\7\6\8\6\1\9\r\l\8\i\x\f\k\g\d\x\y\j\o\s\2\z\t\9\n\t\c\2\5\i\3\f\w\c\f\1\w\j\7\o\e\l\1\4\u\1\c\e\z\5\w\s\v\i\5\k\1\e\5\i\a\4\d\9\a\k\b\b\g\v\j\5\8\r\s\p\1\9\e\a\6\b\1\y\w\8\f\2\q\l\m\c\1\e\s\s\2\e\a\n\t\7\4\i\v\5\a\p\9\z\a\n\f\6\r\c\f\r\y\c\a\i\r\5\l\7\3\1\x\j\6\u\z\f\v\q\o\q\i\w\l\o\4\c\j\o\t\7\3\j\2\f\3\s\v\x\5\4\m\q\b\x\h\p\r\w\p\f\g\s\2\1\8\5\7\3\o\u\z\n\m\b\k\s\g\5\e\x\i\c\o\r\r\a\d\y\u\0\q\g\v\l\d\1\s\g\n\c\5\g\5\h\j\2\f\m\n\u\b\6\8\4\k\p\r\a\2\g\m\n\i\1\7\9\a\g\p\w\0\r\a\7\n\2\t\g\l\c\o\3\y\6\7\h\h\y\r\c\q\k\1\d\v\c\z\1\j\v\z\i\e\w\i\r\m\p\v\l\r\x\2\6\u\h\r\a\a\p\2\m\l\h\h\8\k\b\d\x\t\j\d\o\3\3\k\g\w\e\6\a\p\n\9\c\f\c\z\j\2\e\z\f\5\9\l\y\d\7\w\v\7\q\m\2\l\0\2\r\n\u\9\b\3\i\x\z\h\w\g\l\6\7\o\t\6\e\k\8\y\9\8\j\a\s\k\0\r\8\z\n\v\y\6\7\q\d\q\h\9\i\t\6\j\e\0\h\d\d\e\o\6\n\4\l\x\u\u\r\p\6\g\x\q\c\8\l\3\6\f\t\4\p\c\5\v\s\m\j\q\l\f\8\s\b\m\0\p\6\s\m\z\z\5\o\k\f\p\e\e\h\e\3\a\y\o\j\f\4\8\r\3\g\d\r\p\5\j\p\o\4\6\z\a\y\h\z\f\k\a\s\0\g\y\m\p\f\9\h ]] 00:08:35.060 03:58:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:35.060 03:58:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:35.060 [2024-12-09 03:58:16.945094] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:35.060 [2024-12-09 03:58:16.945237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60586 ] 00:08:35.319 [2024-12-09 03:58:17.101444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.319 [2024-12-09 03:58:17.170884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.319 [2024-12-09 03:58:17.228918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.577  [2024-12-09T03:58:17.527Z] Copying: 512/512 [B] (average 500 kBps) 00:08:35.577 00:08:35.577 03:58:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ q9r1ygjxr472uxxmg63eg9the8ic6blsvx768619rl8ixfkgdxyjos2zt9ntc25i3fwcf1wj7oel14u1cez5wsvi5k1e5ia4d9akbbgvj58rsp19ea6b1yw8f2qlmc1ess2eant74iv5ap9zanf6rcfrycair5l731xj6uzfvqoqiwlo4cjot73j2f3svx54mqbxhprwpfgs218573ouznmbksg5exicorradyu0qgvld1sgnc5g5hj2fmnub684kpra2gmni179agpw0ra7n2tglco3y67hhyrcqk1dvcz1jvziewirmpvlrx26uhraap2mlhh8kbdxtjdo33kgwe6apn9cfczj2ezf59lyd7wv7qm2l02rnu9b3ixzhwgl67ot6ek8y98jask0r8znvy67qdqh9it6je0hddeo6n4lxuurp6gxqc8l36ft4pc5vsmjqlf8sbm0p6smzz5okfpeehe3ayojf48r3gdrp5jpo46zayhzfkas0gympf9h == \q\9\r\1\y\g\j\x\r\4\7\2\u\x\x\m\g\6\3\e\g\9\t\h\e\8\i\c\6\b\l\s\v\x\7\6\8\6\1\9\r\l\8\i\x\f\k\g\d\x\y\j\o\s\2\z\t\9\n\t\c\2\5\i\3\f\w\c\f\1\w\j\7\o\e\l\1\4\u\1\c\e\z\5\w\s\v\i\5\k\1\e\5\i\a\4\d\9\a\k\b\b\g\v\j\5\8\r\s\p\1\9\e\a\6\b\1\y\w\8\f\2\q\l\m\c\1\e\s\s\2\e\a\n\t\7\4\i\v\5\a\p\9\z\a\n\f\6\r\c\f\r\y\c\a\i\r\5\l\7\3\1\x\j\6\u\z\f\v\q\o\q\i\w\l\o\4\c\j\o\t\7\3\j\2\f\3\s\v\x\5\4\m\q\b\x\h\p\r\w\p\f\g\s\2\1\8\5\7\3\o\u\z\n\m\b\k\s\g\5\e\x\i\c\o\r\r\a\d\y\u\0\q\g\v\l\d\1\s\g\n\c\5\g\5\h\j\2\f\m\n\u\b\6\8\4\k\p\r\a\2\g\m\n\i\1\7\9\a\g\p\w\0\r\a\7\n\2\t\g\l\c\o\3\y\6\7\h\h\y\r\c\q\k\1\d\v\c\z\1\j\v\z\i\e\w\i\r\m\p\v\l\r\x\2\6\u\h\r\a\a\p\2\m\l\h\h\8\k\b\d\x\t\j\d\o\3\3\k\g\w\e\6\a\p\n\9\c\f\c\z\j\2\e\z\f\5\9\l\y\d\7\w\v\7\q\m\2\l\0\2\r\n\u\9\b\3\i\x\z\h\w\g\l\6\7\o\t\6\e\k\8\y\9\8\j\a\s\k\0\r\8\z\n\v\y\6\7\q\d\q\h\9\i\t\6\j\e\0\h\d\d\e\o\6\n\4\l\x\u\u\r\p\6\g\x\q\c\8\l\3\6\f\t\4\p\c\5\v\s\m\j\q\l\f\8\s\b\m\0\p\6\s\m\z\z\5\o\k\f\p\e\e\h\e\3\a\y\o\j\f\4\8\r\3\g\d\r\p\5\j\p\o\4\6\z\a\y\h\z\f\k\a\s\0\g\y\m\p\f\9\h ]] 00:08:35.577 03:58:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:35.577 03:58:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:35.836 [2024-12-09 03:58:17.537260] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:35.836 [2024-12-09 03:58:17.537384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60596 ] 00:08:35.836 [2024-12-09 03:58:17.686666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.836 [2024-12-09 03:58:17.750209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.095 [2024-12-09 03:58:17.807767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.095  [2024-12-09T03:58:18.045Z] Copying: 512/512 [B] (average 125 kBps) 00:08:36.095 00:08:36.095 03:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ q9r1ygjxr472uxxmg63eg9the8ic6blsvx768619rl8ixfkgdxyjos2zt9ntc25i3fwcf1wj7oel14u1cez5wsvi5k1e5ia4d9akbbgvj58rsp19ea6b1yw8f2qlmc1ess2eant74iv5ap9zanf6rcfrycair5l731xj6uzfvqoqiwlo4cjot73j2f3svx54mqbxhprwpfgs218573ouznmbksg5exicorradyu0qgvld1sgnc5g5hj2fmnub684kpra2gmni179agpw0ra7n2tglco3y67hhyrcqk1dvcz1jvziewirmpvlrx26uhraap2mlhh8kbdxtjdo33kgwe6apn9cfczj2ezf59lyd7wv7qm2l02rnu9b3ixzhwgl67ot6ek8y98jask0r8znvy67qdqh9it6je0hddeo6n4lxuurp6gxqc8l36ft4pc5vsmjqlf8sbm0p6smzz5okfpeehe3ayojf48r3gdrp5jpo46zayhzfkas0gympf9h == \q\9\r\1\y\g\j\x\r\4\7\2\u\x\x\m\g\6\3\e\g\9\t\h\e\8\i\c\6\b\l\s\v\x\7\6\8\6\1\9\r\l\8\i\x\f\k\g\d\x\y\j\o\s\2\z\t\9\n\t\c\2\5\i\3\f\w\c\f\1\w\j\7\o\e\l\1\4\u\1\c\e\z\5\w\s\v\i\5\k\1\e\5\i\a\4\d\9\a\k\b\b\g\v\j\5\8\r\s\p\1\9\e\a\6\b\1\y\w\8\f\2\q\l\m\c\1\e\s\s\2\e\a\n\t\7\4\i\v\5\a\p\9\z\a\n\f\6\r\c\f\r\y\c\a\i\r\5\l\7\3\1\x\j\6\u\z\f\v\q\o\q\i\w\l\o\4\c\j\o\t\7\3\j\2\f\3\s\v\x\5\4\m\q\b\x\h\p\r\w\p\f\g\s\2\1\8\5\7\3\o\u\z\n\m\b\k\s\g\5\e\x\i\c\o\r\r\a\d\y\u\0\q\g\v\l\d\1\s\g\n\c\5\g\5\h\j\2\f\m\n\u\b\6\8\4\k\p\r\a\2\g\m\n\i\1\7\9\a\g\p\w\0\r\a\7\n\2\t\g\l\c\o\3\y\6\7\h\h\y\r\c\q\k\1\d\v\c\z\1\j\v\z\i\e\w\i\r\m\p\v\l\r\x\2\6\u\h\r\a\a\p\2\m\l\h\h\8\k\b\d\x\t\j\d\o\3\3\k\g\w\e\6\a\p\n\9\c\f\c\z\j\2\e\z\f\5\9\l\y\d\7\w\v\7\q\m\2\l\0\2\r\n\u\9\b\3\i\x\z\h\w\g\l\6\7\o\t\6\e\k\8\y\9\8\j\a\s\k\0\r\8\z\n\v\y\6\7\q\d\q\h\9\i\t\6\j\e\0\h\d\d\e\o\6\n\4\l\x\u\u\r\p\6\g\x\q\c\8\l\3\6\f\t\4\p\c\5\v\s\m\j\q\l\f\8\s\b\m\0\p\6\s\m\z\z\5\o\k\f\p\e\e\h\e\3\a\y\o\j\f\4\8\r\3\g\d\r\p\5\j\p\o\4\6\z\a\y\h\z\f\k\a\s\0\g\y\m\p\f\9\h ]] 00:08:36.095 03:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:36.095 03:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:36.354 [2024-12-09 03:58:18.101534] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:36.354 [2024-12-09 03:58:18.101931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60605 ] 00:08:36.354 [2024-12-09 03:58:18.251400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.612 [2024-12-09 03:58:18.315651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.612 [2024-12-09 03:58:18.370809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.612  [2024-12-09T03:58:18.820Z] Copying: 512/512 [B] (average 250 kBps) 00:08:36.870 00:08:36.870 03:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ q9r1ygjxr472uxxmg63eg9the8ic6blsvx768619rl8ixfkgdxyjos2zt9ntc25i3fwcf1wj7oel14u1cez5wsvi5k1e5ia4d9akbbgvj58rsp19ea6b1yw8f2qlmc1ess2eant74iv5ap9zanf6rcfrycair5l731xj6uzfvqoqiwlo4cjot73j2f3svx54mqbxhprwpfgs218573ouznmbksg5exicorradyu0qgvld1sgnc5g5hj2fmnub684kpra2gmni179agpw0ra7n2tglco3y67hhyrcqk1dvcz1jvziewirmpvlrx26uhraap2mlhh8kbdxtjdo33kgwe6apn9cfczj2ezf59lyd7wv7qm2l02rnu9b3ixzhwgl67ot6ek8y98jask0r8znvy67qdqh9it6je0hddeo6n4lxuurp6gxqc8l36ft4pc5vsmjqlf8sbm0p6smzz5okfpeehe3ayojf48r3gdrp5jpo46zayhzfkas0gympf9h == \q\9\r\1\y\g\j\x\r\4\7\2\u\x\x\m\g\6\3\e\g\9\t\h\e\8\i\c\6\b\l\s\v\x\7\6\8\6\1\9\r\l\8\i\x\f\k\g\d\x\y\j\o\s\2\z\t\9\n\t\c\2\5\i\3\f\w\c\f\1\w\j\7\o\e\l\1\4\u\1\c\e\z\5\w\s\v\i\5\k\1\e\5\i\a\4\d\9\a\k\b\b\g\v\j\5\8\r\s\p\1\9\e\a\6\b\1\y\w\8\f\2\q\l\m\c\1\e\s\s\2\e\a\n\t\7\4\i\v\5\a\p\9\z\a\n\f\6\r\c\f\r\y\c\a\i\r\5\l\7\3\1\x\j\6\u\z\f\v\q\o\q\i\w\l\o\4\c\j\o\t\7\3\j\2\f\3\s\v\x\5\4\m\q\b\x\h\p\r\w\p\f\g\s\2\1\8\5\7\3\o\u\z\n\m\b\k\s\g\5\e\x\i\c\o\r\r\a\d\y\u\0\q\g\v\l\d\1\s\g\n\c\5\g\5\h\j\2\f\m\n\u\b\6\8\4\k\p\r\a\2\g\m\n\i\1\7\9\a\g\p\w\0\r\a\7\n\2\t\g\l\c\o\3\y\6\7\h\h\y\r\c\q\k\1\d\v\c\z\1\j\v\z\i\e\w\i\r\m\p\v\l\r\x\2\6\u\h\r\a\a\p\2\m\l\h\h\8\k\b\d\x\t\j\d\o\3\3\k\g\w\e\6\a\p\n\9\c\f\c\z\j\2\e\z\f\5\9\l\y\d\7\w\v\7\q\m\2\l\0\2\r\n\u\9\b\3\i\x\z\h\w\g\l\6\7\o\t\6\e\k\8\y\9\8\j\a\s\k\0\r\8\z\n\v\y\6\7\q\d\q\h\9\i\t\6\j\e\0\h\d\d\e\o\6\n\4\l\x\u\u\r\p\6\g\x\q\c\8\l\3\6\f\t\4\p\c\5\v\s\m\j\q\l\f\8\s\b\m\0\p\6\s\m\z\z\5\o\k\f\p\e\e\h\e\3\a\y\o\j\f\4\8\r\3\g\d\r\p\5\j\p\o\4\6\z\a\y\h\z\f\k\a\s\0\g\y\m\p\f\9\h ]] 00:08:36.870 03:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:36.870 03:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:36.870 03:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:36.870 03:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:36.870 03:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:36.870 03:58:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:36.870 [2024-12-09 03:58:18.674370] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:36.870 [2024-12-09 03:58:18.674527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60615 ] 00:08:37.128 [2024-12-09 03:58:18.822413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.128 [2024-12-09 03:58:18.889070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.128 [2024-12-09 03:58:18.944008] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.128  [2024-12-09T03:58:19.335Z] Copying: 512/512 [B] (average 500 kBps) 00:08:37.385 00:08:37.385 03:58:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9vvo2u49n0crfynh4sds9r81hihvpc0b380y900untknwhioa6ik2jrw7hc39nc2n52qyp64ahmb5uosuvifigncip8wrh0m37a7h4dkuwthc37uqt7d4tir6b08ybgrdks6jwyfs0ycszioqa26kqmcwo950h5kkgxvru6zxzizfzs2ntfra3qrku2kwtahr5jtf1tgf80goh8r4zykvxc2ptdaozh8d61h8nu6hx1yq4eyak6fbij1dz27uqpyeck5zpq1csm8e9p9gzqp7gc1xz7tbgoduln9oercyavqtepc3ytztrv82rae4ti2rc3qnnaheuvdxdvxb4glzctt70fgbjtfkumwiss5m5fhl3vao3v2hvetexj0qyi1becldlprfyhyerttzlxj37zesq95nz54l2cabme14to55v0f82pf9tmyyguyn44hk1fwggnbwxw3qahjrmt2vcy3hje565cytcy6bf3r5afx5tcf9cmtukqjugize6oz == \9\v\v\o\2\u\4\9\n\0\c\r\f\y\n\h\4\s\d\s\9\r\8\1\h\i\h\v\p\c\0\b\3\8\0\y\9\0\0\u\n\t\k\n\w\h\i\o\a\6\i\k\2\j\r\w\7\h\c\3\9\n\c\2\n\5\2\q\y\p\6\4\a\h\m\b\5\u\o\s\u\v\i\f\i\g\n\c\i\p\8\w\r\h\0\m\3\7\a\7\h\4\d\k\u\w\t\h\c\3\7\u\q\t\7\d\4\t\i\r\6\b\0\8\y\b\g\r\d\k\s\6\j\w\y\f\s\0\y\c\s\z\i\o\q\a\2\6\k\q\m\c\w\o\9\5\0\h\5\k\k\g\x\v\r\u\6\z\x\z\i\z\f\z\s\2\n\t\f\r\a\3\q\r\k\u\2\k\w\t\a\h\r\5\j\t\f\1\t\g\f\8\0\g\o\h\8\r\4\z\y\k\v\x\c\2\p\t\d\a\o\z\h\8\d\6\1\h\8\n\u\6\h\x\1\y\q\4\e\y\a\k\6\f\b\i\j\1\d\z\2\7\u\q\p\y\e\c\k\5\z\p\q\1\c\s\m\8\e\9\p\9\g\z\q\p\7\g\c\1\x\z\7\t\b\g\o\d\u\l\n\9\o\e\r\c\y\a\v\q\t\e\p\c\3\y\t\z\t\r\v\8\2\r\a\e\4\t\i\2\r\c\3\q\n\n\a\h\e\u\v\d\x\d\v\x\b\4\g\l\z\c\t\t\7\0\f\g\b\j\t\f\k\u\m\w\i\s\s\5\m\5\f\h\l\3\v\a\o\3\v\2\h\v\e\t\e\x\j\0\q\y\i\1\b\e\c\l\d\l\p\r\f\y\h\y\e\r\t\t\z\l\x\j\3\7\z\e\s\q\9\5\n\z\5\4\l\2\c\a\b\m\e\1\4\t\o\5\5\v\0\f\8\2\p\f\9\t\m\y\y\g\u\y\n\4\4\h\k\1\f\w\g\g\n\b\w\x\w\3\q\a\h\j\r\m\t\2\v\c\y\3\h\j\e\5\6\5\c\y\t\c\y\6\b\f\3\r\5\a\f\x\5\t\c\f\9\c\m\t\u\k\q\j\u\g\i\z\e\6\o\z ]] 00:08:37.385 03:58:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:37.385 03:58:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:37.385 [2024-12-09 03:58:19.235673] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:37.385 [2024-12-09 03:58:19.235805] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60624 ] 00:08:37.645 [2024-12-09 03:58:19.383300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.645 [2024-12-09 03:58:19.448363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.645 [2024-12-09 03:58:19.503943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.645  [2024-12-09T03:58:19.853Z] Copying: 512/512 [B] (average 500 kBps) 00:08:37.903 00:08:37.903 03:58:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9vvo2u49n0crfynh4sds9r81hihvpc0b380y900untknwhioa6ik2jrw7hc39nc2n52qyp64ahmb5uosuvifigncip8wrh0m37a7h4dkuwthc37uqt7d4tir6b08ybgrdks6jwyfs0ycszioqa26kqmcwo950h5kkgxvru6zxzizfzs2ntfra3qrku2kwtahr5jtf1tgf80goh8r4zykvxc2ptdaozh8d61h8nu6hx1yq4eyak6fbij1dz27uqpyeck5zpq1csm8e9p9gzqp7gc1xz7tbgoduln9oercyavqtepc3ytztrv82rae4ti2rc3qnnaheuvdxdvxb4glzctt70fgbjtfkumwiss5m5fhl3vao3v2hvetexj0qyi1becldlprfyhyerttzlxj37zesq95nz54l2cabme14to55v0f82pf9tmyyguyn44hk1fwggnbwxw3qahjrmt2vcy3hje565cytcy6bf3r5afx5tcf9cmtukqjugize6oz == \9\v\v\o\2\u\4\9\n\0\c\r\f\y\n\h\4\s\d\s\9\r\8\1\h\i\h\v\p\c\0\b\3\8\0\y\9\0\0\u\n\t\k\n\w\h\i\o\a\6\i\k\2\j\r\w\7\h\c\3\9\n\c\2\n\5\2\q\y\p\6\4\a\h\m\b\5\u\o\s\u\v\i\f\i\g\n\c\i\p\8\w\r\h\0\m\3\7\a\7\h\4\d\k\u\w\t\h\c\3\7\u\q\t\7\d\4\t\i\r\6\b\0\8\y\b\g\r\d\k\s\6\j\w\y\f\s\0\y\c\s\z\i\o\q\a\2\6\k\q\m\c\w\o\9\5\0\h\5\k\k\g\x\v\r\u\6\z\x\z\i\z\f\z\s\2\n\t\f\r\a\3\q\r\k\u\2\k\w\t\a\h\r\5\j\t\f\1\t\g\f\8\0\g\o\h\8\r\4\z\y\k\v\x\c\2\p\t\d\a\o\z\h\8\d\6\1\h\8\n\u\6\h\x\1\y\q\4\e\y\a\k\6\f\b\i\j\1\d\z\2\7\u\q\p\y\e\c\k\5\z\p\q\1\c\s\m\8\e\9\p\9\g\z\q\p\7\g\c\1\x\z\7\t\b\g\o\d\u\l\n\9\o\e\r\c\y\a\v\q\t\e\p\c\3\y\t\z\t\r\v\8\2\r\a\e\4\t\i\2\r\c\3\q\n\n\a\h\e\u\v\d\x\d\v\x\b\4\g\l\z\c\t\t\7\0\f\g\b\j\t\f\k\u\m\w\i\s\s\5\m\5\f\h\l\3\v\a\o\3\v\2\h\v\e\t\e\x\j\0\q\y\i\1\b\e\c\l\d\l\p\r\f\y\h\y\e\r\t\t\z\l\x\j\3\7\z\e\s\q\9\5\n\z\5\4\l\2\c\a\b\m\e\1\4\t\o\5\5\v\0\f\8\2\p\f\9\t\m\y\y\g\u\y\n\4\4\h\k\1\f\w\g\g\n\b\w\x\w\3\q\a\h\j\r\m\t\2\v\c\y\3\h\j\e\5\6\5\c\y\t\c\y\6\b\f\3\r\5\a\f\x\5\t\c\f\9\c\m\t\u\k\q\j\u\g\i\z\e\6\o\z ]] 00:08:37.903 03:58:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:37.903 03:58:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:37.903 [2024-12-09 03:58:19.788275] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:37.903 [2024-12-09 03:58:19.788694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60634 ] 00:08:38.161 [2024-12-09 03:58:19.940293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.161 [2024-12-09 03:58:20.013924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.161 [2024-12-09 03:58:20.073603] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.419  [2024-12-09T03:58:20.369Z] Copying: 512/512 [B] (average 500 kBps) 00:08:38.419 00:08:38.419 03:58:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9vvo2u49n0crfynh4sds9r81hihvpc0b380y900untknwhioa6ik2jrw7hc39nc2n52qyp64ahmb5uosuvifigncip8wrh0m37a7h4dkuwthc37uqt7d4tir6b08ybgrdks6jwyfs0ycszioqa26kqmcwo950h5kkgxvru6zxzizfzs2ntfra3qrku2kwtahr5jtf1tgf80goh8r4zykvxc2ptdaozh8d61h8nu6hx1yq4eyak6fbij1dz27uqpyeck5zpq1csm8e9p9gzqp7gc1xz7tbgoduln9oercyavqtepc3ytztrv82rae4ti2rc3qnnaheuvdxdvxb4glzctt70fgbjtfkumwiss5m5fhl3vao3v2hvetexj0qyi1becldlprfyhyerttzlxj37zesq95nz54l2cabme14to55v0f82pf9tmyyguyn44hk1fwggnbwxw3qahjrmt2vcy3hje565cytcy6bf3r5afx5tcf9cmtukqjugize6oz == \9\v\v\o\2\u\4\9\n\0\c\r\f\y\n\h\4\s\d\s\9\r\8\1\h\i\h\v\p\c\0\b\3\8\0\y\9\0\0\u\n\t\k\n\w\h\i\o\a\6\i\k\2\j\r\w\7\h\c\3\9\n\c\2\n\5\2\q\y\p\6\4\a\h\m\b\5\u\o\s\u\v\i\f\i\g\n\c\i\p\8\w\r\h\0\m\3\7\a\7\h\4\d\k\u\w\t\h\c\3\7\u\q\t\7\d\4\t\i\r\6\b\0\8\y\b\g\r\d\k\s\6\j\w\y\f\s\0\y\c\s\z\i\o\q\a\2\6\k\q\m\c\w\o\9\5\0\h\5\k\k\g\x\v\r\u\6\z\x\z\i\z\f\z\s\2\n\t\f\r\a\3\q\r\k\u\2\k\w\t\a\h\r\5\j\t\f\1\t\g\f\8\0\g\o\h\8\r\4\z\y\k\v\x\c\2\p\t\d\a\o\z\h\8\d\6\1\h\8\n\u\6\h\x\1\y\q\4\e\y\a\k\6\f\b\i\j\1\d\z\2\7\u\q\p\y\e\c\k\5\z\p\q\1\c\s\m\8\e\9\p\9\g\z\q\p\7\g\c\1\x\z\7\t\b\g\o\d\u\l\n\9\o\e\r\c\y\a\v\q\t\e\p\c\3\y\t\z\t\r\v\8\2\r\a\e\4\t\i\2\r\c\3\q\n\n\a\h\e\u\v\d\x\d\v\x\b\4\g\l\z\c\t\t\7\0\f\g\b\j\t\f\k\u\m\w\i\s\s\5\m\5\f\h\l\3\v\a\o\3\v\2\h\v\e\t\e\x\j\0\q\y\i\1\b\e\c\l\d\l\p\r\f\y\h\y\e\r\t\t\z\l\x\j\3\7\z\e\s\q\9\5\n\z\5\4\l\2\c\a\b\m\e\1\4\t\o\5\5\v\0\f\8\2\p\f\9\t\m\y\y\g\u\y\n\4\4\h\k\1\f\w\g\g\n\b\w\x\w\3\q\a\h\j\r\m\t\2\v\c\y\3\h\j\e\5\6\5\c\y\t\c\y\6\b\f\3\r\5\a\f\x\5\t\c\f\9\c\m\t\u\k\q\j\u\g\i\z\e\6\o\z ]] 00:08:38.419 03:58:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:38.419 03:58:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:38.687 [2024-12-09 03:58:20.368890] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:38.687 [2024-12-09 03:58:20.369008] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60643 ] 00:08:38.687 [2024-12-09 03:58:20.518787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.687 [2024-12-09 03:58:20.584541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.958 [2024-12-09 03:58:20.643615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.958  [2024-12-09T03:58:20.908Z] Copying: 512/512 [B] (average 125 kBps) 00:08:38.958 00:08:38.958 03:58:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9vvo2u49n0crfynh4sds9r81hihvpc0b380y900untknwhioa6ik2jrw7hc39nc2n52qyp64ahmb5uosuvifigncip8wrh0m37a7h4dkuwthc37uqt7d4tir6b08ybgrdks6jwyfs0ycszioqa26kqmcwo950h5kkgxvru6zxzizfzs2ntfra3qrku2kwtahr5jtf1tgf80goh8r4zykvxc2ptdaozh8d61h8nu6hx1yq4eyak6fbij1dz27uqpyeck5zpq1csm8e9p9gzqp7gc1xz7tbgoduln9oercyavqtepc3ytztrv82rae4ti2rc3qnnaheuvdxdvxb4glzctt70fgbjtfkumwiss5m5fhl3vao3v2hvetexj0qyi1becldlprfyhyerttzlxj37zesq95nz54l2cabme14to55v0f82pf9tmyyguyn44hk1fwggnbwxw3qahjrmt2vcy3hje565cytcy6bf3r5afx5tcf9cmtukqjugize6oz == \9\v\v\o\2\u\4\9\n\0\c\r\f\y\n\h\4\s\d\s\9\r\8\1\h\i\h\v\p\c\0\b\3\8\0\y\9\0\0\u\n\t\k\n\w\h\i\o\a\6\i\k\2\j\r\w\7\h\c\3\9\n\c\2\n\5\2\q\y\p\6\4\a\h\m\b\5\u\o\s\u\v\i\f\i\g\n\c\i\p\8\w\r\h\0\m\3\7\a\7\h\4\d\k\u\w\t\h\c\3\7\u\q\t\7\d\4\t\i\r\6\b\0\8\y\b\g\r\d\k\s\6\j\w\y\f\s\0\y\c\s\z\i\o\q\a\2\6\k\q\m\c\w\o\9\5\0\h\5\k\k\g\x\v\r\u\6\z\x\z\i\z\f\z\s\2\n\t\f\r\a\3\q\r\k\u\2\k\w\t\a\h\r\5\j\t\f\1\t\g\f\8\0\g\o\h\8\r\4\z\y\k\v\x\c\2\p\t\d\a\o\z\h\8\d\6\1\h\8\n\u\6\h\x\1\y\q\4\e\y\a\k\6\f\b\i\j\1\d\z\2\7\u\q\p\y\e\c\k\5\z\p\q\1\c\s\m\8\e\9\p\9\g\z\q\p\7\g\c\1\x\z\7\t\b\g\o\d\u\l\n\9\o\e\r\c\y\a\v\q\t\e\p\c\3\y\t\z\t\r\v\8\2\r\a\e\4\t\i\2\r\c\3\q\n\n\a\h\e\u\v\d\x\d\v\x\b\4\g\l\z\c\t\t\7\0\f\g\b\j\t\f\k\u\m\w\i\s\s\5\m\5\f\h\l\3\v\a\o\3\v\2\h\v\e\t\e\x\j\0\q\y\i\1\b\e\c\l\d\l\p\r\f\y\h\y\e\r\t\t\z\l\x\j\3\7\z\e\s\q\9\5\n\z\5\4\l\2\c\a\b\m\e\1\4\t\o\5\5\v\0\f\8\2\p\f\9\t\m\y\y\g\u\y\n\4\4\h\k\1\f\w\g\g\n\b\w\x\w\3\q\a\h\j\r\m\t\2\v\c\y\3\h\j\e\5\6\5\c\y\t\c\y\6\b\f\3\r\5\a\f\x\5\t\c\f\9\c\m\t\u\k\q\j\u\g\i\z\e\6\o\z ]] 00:08:38.958 00:08:38.958 real 0m4.577s 00:08:38.958 user 0m2.515s 00:08:38.958 sys 0m2.327s 00:08:38.958 03:58:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.958 ************************************ 00:08:38.958 END TEST dd_flags_misc 00:08:38.958 ************************************ 00:08:38.958 03:58:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:39.216 * Second test run, disabling liburing, forcing AIO 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:39.216 ************************************ 00:08:39.216 START TEST dd_flag_append_forced_aio 00:08:39.216 ************************************ 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=i0qj99o2tnk4a51nkuoqu65smlmktjz1 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=hcjobhaj6t49nvgh1ypzztead4vuy5js 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s i0qj99o2tnk4a51nkuoqu65smlmktjz1 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s hcjobhaj6t49nvgh1ypzztead4vuy5js 00:08:39.216 03:58:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:39.216 [2024-12-09 03:58:20.996559] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:39.216 [2024-12-09 03:58:20.997302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60677 ] 00:08:39.216 [2024-12-09 03:58:21.148714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.473 [2024-12-09 03:58:21.219641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.473 [2024-12-09 03:58:21.280725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.473  [2024-12-09T03:58:21.681Z] Copying: 32/32 [B] (average 31 kBps) 00:08:39.731 00:08:39.731 03:58:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ hcjobhaj6t49nvgh1ypzztead4vuy5jsi0qj99o2tnk4a51nkuoqu65smlmktjz1 == \h\c\j\o\b\h\a\j\6\t\4\9\n\v\g\h\1\y\p\z\z\t\e\a\d\4\v\u\y\5\j\s\i\0\q\j\9\9\o\2\t\n\k\4\a\5\1\n\k\u\o\q\u\6\5\s\m\l\m\k\t\j\z\1 ]] 00:08:39.731 00:08:39.731 real 0m0.605s 00:08:39.731 user 0m0.325s 00:08:39.731 sys 0m0.157s 00:08:39.731 03:58:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.731 ************************************ 00:08:39.731 END TEST dd_flag_append_forced_aio 00:08:39.731 ************************************ 00:08:39.731 03:58:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:39.731 03:58:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:39.731 03:58:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.731 03:58:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.731 03:58:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:39.731 ************************************ 00:08:39.731 START TEST dd_flag_directory_forced_aio 00:08:39.731 ************************************ 00:08:39.731 03:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:08:39.731 03:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:39.731 03:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:39.731 03:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:39.731 03:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.731 03:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.731 03:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.731 03:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.731 03:58:21 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.731 03:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.731 03:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.731 03:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:39.732 03:58:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:39.732 [2024-12-09 03:58:21.658647] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:39.732 [2024-12-09 03:58:21.659355] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60704 ] 00:08:39.990 [2024-12-09 03:58:21.808232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.990 [2024-12-09 03:58:21.867622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.990 [2024-12-09 03:58:21.923302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.249 [2024-12-09 03:58:21.963162] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:40.249 [2024-12-09 03:58:21.963240] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:40.249 [2024-12-09 03:58:21.963271] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.249 [2024-12-09 03:58:22.086723] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:40.249 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:40.249 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:40.249 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:40.249 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:40.249 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:40.250 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:40.250 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:40.250 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:40.250 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:40.250 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.250 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.250 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.250 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.250 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.250 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:40.250 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.250 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:40.250 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:40.508 [2024-12-09 03:58:22.217887] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:40.508 [2024-12-09 03:58:22.218524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60713 ] 00:08:40.508 [2024-12-09 03:58:22.366046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.508 [2024-12-09 03:58:22.431280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.767 [2024-12-09 03:58:22.488923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.767 [2024-12-09 03:58:22.530744] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:40.768 [2024-12-09 03:58:22.530824] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:40.768 [2024-12-09 03:58:22.530841] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.768 [2024-12-09 03:58:22.655066] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:41.027 03:58:22 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.027 00:08:41.027 real 0m1.127s 00:08:41.027 user 0m0.613s 00:08:41.027 sys 0m0.300s 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:41.027 ************************************ 00:08:41.027 END TEST dd_flag_directory_forced_aio 00:08:41.027 ************************************ 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:41.027 ************************************ 00:08:41.027 START TEST dd_flag_nofollow_forced_aio 00:08:41.027 ************************************ 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.027 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.028 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.028 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.028 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.028 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.028 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.028 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:41.028 03:58:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:41.028 [2024-12-09 03:58:22.845009] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:41.028 [2024-12-09 03:58:22.845110] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60742 ] 00:08:41.286 [2024-12-09 03:58:22.993399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.286 [2024-12-09 03:58:23.053417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.286 [2024-12-09 03:58:23.110678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.286 [2024-12-09 03:58:23.151154] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:41.286 [2024-12-09 03:58:23.151228] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:41.286 [2024-12-09 03:58:23.151245] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:41.545 [2024-12-09 03:58:23.272787] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:41.545 03:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:41.545 03:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.545 03:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:41.545 03:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:41.545 03:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:41.545 03:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.545 03:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:41.545 03:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:41.545 03:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:41.545 03:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.545 03:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.545 03:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.545 03:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.545 03:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.545 03:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.545 03:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.545 03:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:41.545 03:58:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:41.545 [2024-12-09 03:58:23.405567] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:41.545 [2024-12-09 03:58:23.405694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60751 ] 00:08:41.803 [2024-12-09 03:58:23.559674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.803 [2024-12-09 03:58:23.654580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.803 [2024-12-09 03:58:23.734686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.061 [2024-12-09 03:58:23.791318] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:42.061 [2024-12-09 03:58:23.791399] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:42.061 [2024-12-09 03:58:23.791421] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:42.061 [2024-12-09 03:58:23.967674] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:42.319 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:42.319 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:42.319 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:42.319 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:42.319 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:42.319 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:42.319 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:08:42.319 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:42.319 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:42.320 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:42.320 [2024-12-09 03:58:24.125675] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:42.320 [2024-12-09 03:58:24.126404] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60764 ] 00:08:42.577 [2024-12-09 03:58:24.280983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.577 [2024-12-09 03:58:24.370441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.577 [2024-12-09 03:58:24.457071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.577  [2024-12-09T03:58:25.106Z] Copying: 512/512 [B] (average 500 kBps) 00:08:43.156 00:08:43.156 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ qktl688k4932a0cf943ohow3q6u9zsfx3qz2s1ectezzku3k9sqi6is67zktrd88f95w7c7jtk5mamfqnhhotj8uahmdfivpdw9s3ztl8sesz7puhpcsj7m9bzaw4smwnvanlgvefz8kghkvo7ipl3kz7hsv16k97hjowydfkb1nu3ob8ll2i21zmqj2nyic2halkvrd9u00sazqyp87p3d2n0z6l6tq4cyat16sqndl34n5qqp5umms5enmo7qs1hn0b14okvyu8jfjatdic8g9x5i6y49uqdnrzvwro92tvqjq9gml95f5x54hmsx8c65dljua9z06ryh5wafpigagmxfyhgv2rnk0yc92ap2s5r40p7w532bvabg4s6k8glf3311tzy39xdl98elerzs3yjvupssx07rl7u5lifvg0holvt2pkjqvd6bkjunp4l510is353b1jk8bymdy6lyjccr0wem1rkqzbani0f7w11ggmd8zoa834t1j4z2y == \q\k\t\l\6\8\8\k\4\9\3\2\a\0\c\f\9\4\3\o\h\o\w\3\q\6\u\9\z\s\f\x\3\q\z\2\s\1\e\c\t\e\z\z\k\u\3\k\9\s\q\i\6\i\s\6\7\z\k\t\r\d\8\8\f\9\5\w\7\c\7\j\t\k\5\m\a\m\f\q\n\h\h\o\t\j\8\u\a\h\m\d\f\i\v\p\d\w\9\s\3\z\t\l\8\s\e\s\z\7\p\u\h\p\c\s\j\7\m\9\b\z\a\w\4\s\m\w\n\v\a\n\l\g\v\e\f\z\8\k\g\h\k\v\o\7\i\p\l\3\k\z\7\h\s\v\1\6\k\9\7\h\j\o\w\y\d\f\k\b\1\n\u\3\o\b\8\l\l\2\i\2\1\z\m\q\j\2\n\y\i\c\2\h\a\l\k\v\r\d\9\u\0\0\s\a\z\q\y\p\8\7\p\3\d\2\n\0\z\6\l\6\t\q\4\c\y\a\t\1\6\s\q\n\d\l\3\4\n\5\q\q\p\5\u\m\m\s\5\e\n\m\o\7\q\s\1\h\n\0\b\1\4\o\k\v\y\u\8\j\f\j\a\t\d\i\c\8\g\9\x\5\i\6\y\4\9\u\q\d\n\r\z\v\w\r\o\9\2\t\v\q\j\q\9\g\m\l\9\5\f\5\x\5\4\h\m\s\x\8\c\6\5\d\l\j\u\a\9\z\0\6\r\y\h\5\w\a\f\p\i\g\a\g\m\x\f\y\h\g\v\2\r\n\k\0\y\c\9\2\a\p\2\s\5\r\4\0\p\7\w\5\3\2\b\v\a\b\g\4\s\6\k\8\g\l\f\3\3\1\1\t\z\y\3\9\x\d\l\9\8\e\l\e\r\z\s\3\y\j\v\u\p\s\s\x\0\7\r\l\7\u\5\l\i\f\v\g\0\h\o\l\v\t\2\p\k\j\q\v\d\6\b\k\j\u\n\p\4\l\5\1\0\i\s\3\5\3\b\1\j\k\8\b\y\m\d\y\6\l\y\j\c\c\r\0\w\e\m\1\r\k\q\z\b\a\n\i\0\f\7\w\1\1\g\g\m\d\8\z\o\a\8\3\4\t\1\j\4\z\2\y ]] 00:08:43.156 00:08:43.156 real 0m2.045s 00:08:43.156 user 0m1.175s 00:08:43.156 sys 0m0.532s 00:08:43.156 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.156 ************************************ 00:08:43.156 END TEST dd_flag_nofollow_forced_aio 00:08:43.156 ************************************ 00:08:43.156 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:43.156 03:58:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:08:43.156 03:58:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:43.156 03:58:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.156 03:58:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:43.156 ************************************ 00:08:43.156 START TEST dd_flag_noatime_forced_aio 00:08:43.156 ************************************ 00:08:43.156 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:08:43.156 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:43.156 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:43.156 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:43.156 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:43.156 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:43.156 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:43.156 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733716704 00:08:43.156 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:43.156 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733716704 00:08:43.157 03:58:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:44.119 03:58:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:44.119 [2024-12-09 03:58:25.961751] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:44.119 [2024-12-09 03:58:25.961870] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60805 ] 00:08:44.378 [2024-12-09 03:58:26.117314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.378 [2024-12-09 03:58:26.209093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.378 [2024-12-09 03:58:26.288299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:44.636  [2024-12-09T03:58:26.845Z] Copying: 512/512 [B] (average 500 kBps) 00:08:44.895 00:08:44.895 03:58:26 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:44.895 03:58:26 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733716704 )) 00:08:44.895 03:58:26 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:44.895 03:58:26 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733716704 )) 00:08:44.895 03:58:26 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:44.895 [2024-12-09 03:58:26.724984] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:44.895 [2024-12-09 03:58:26.725123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60822 ] 00:08:45.153 [2024-12-09 03:58:26.872528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.153 [2024-12-09 03:58:26.944543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.153 [2024-12-09 03:58:27.023678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:45.153  [2024-12-09T03:58:27.671Z] Copying: 512/512 [B] (average 500 kBps) 00:08:45.721 00:08:45.721 03:58:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:45.721 ************************************ 00:08:45.721 END TEST dd_flag_noatime_forced_aio 00:08:45.721 ************************************ 00:08:45.721 03:58:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733716707 )) 00:08:45.721 00:08:45.721 real 0m2.504s 00:08:45.721 user 0m0.855s 00:08:45.721 sys 0m0.403s 00:08:45.721 03:58:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.721 03:58:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:45.721 03:58:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:45.721 03:58:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.721 03:58:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.721 03:58:27 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:45.721 ************************************ 00:08:45.721 START TEST dd_flags_misc_forced_aio 00:08:45.721 ************************************ 00:08:45.721 03:58:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:08:45.721 03:58:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:45.721 03:58:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:45.721 03:58:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:45.721 03:58:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:45.721 03:58:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:45.721 03:58:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:45.721 03:58:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:45.721 03:58:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:45.721 03:58:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:45.721 [2024-12-09 03:58:27.504282] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:45.721 [2024-12-09 03:58:27.504379] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60848 ] 00:08:45.721 [2024-12-09 03:58:27.648706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.978 [2024-12-09 03:58:27.723089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.978 [2024-12-09 03:58:27.799074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:45.978  [2024-12-09T03:58:28.187Z] Copying: 512/512 [B] (average 500 kBps) 00:08:46.237 00:08:46.237 03:58:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 17xysvjcmrugty3qn7otz1l4wbzhm22ai1uhxnt0951ndezns7apzzo1cky1c4sduh6vgef4mv6zgfbjg5elr632siss8693uaj6beu8bn2lgviytjva30sh5fj0njivhyaualnrli7hsjg2tzkzdzjyirli4rv1qsalo3najxm3r92zu63w4b1yk2x1jy1m8ungzu8zft2updp4nipgo1pwz06u1uakmfadpbvzi75sorhe3glebvjuj2yrbpzok5ncj91atdlg3naoz2wz42bp0rvejq38o54ocpeyhct21fuzyetj43e315k1qdcsubk8z0z30x2whvpy84ssl3jn7i7ymqsdm26w047pf1il033nwbs2wiza2obqli5qap7k67wj1ab4w2mlzda7fzwlehuqskwer9i62qjpmihtq3ffiac9tpjlrqjbhlephx9f5tgg6okuocjgm58e2hvuan5031op1thr74wzxngyeytfu36hbo3cssqdwkio == 
\1\7\x\y\s\v\j\c\m\r\u\g\t\y\3\q\n\7\o\t\z\1\l\4\w\b\z\h\m\2\2\a\i\1\u\h\x\n\t\0\9\5\1\n\d\e\z\n\s\7\a\p\z\z\o\1\c\k\y\1\c\4\s\d\u\h\6\v\g\e\f\4\m\v\6\z\g\f\b\j\g\5\e\l\r\6\3\2\s\i\s\s\8\6\9\3\u\a\j\6\b\e\u\8\b\n\2\l\g\v\i\y\t\j\v\a\3\0\s\h\5\f\j\0\n\j\i\v\h\y\a\u\a\l\n\r\l\i\7\h\s\j\g\2\t\z\k\z\d\z\j\y\i\r\l\i\4\r\v\1\q\s\a\l\o\3\n\a\j\x\m\3\r\9\2\z\u\6\3\w\4\b\1\y\k\2\x\1\j\y\1\m\8\u\n\g\z\u\8\z\f\t\2\u\p\d\p\4\n\i\p\g\o\1\p\w\z\0\6\u\1\u\a\k\m\f\a\d\p\b\v\z\i\7\5\s\o\r\h\e\3\g\l\e\b\v\j\u\j\2\y\r\b\p\z\o\k\5\n\c\j\9\1\a\t\d\l\g\3\n\a\o\z\2\w\z\4\2\b\p\0\r\v\e\j\q\3\8\o\5\4\o\c\p\e\y\h\c\t\2\1\f\u\z\y\e\t\j\4\3\e\3\1\5\k\1\q\d\c\s\u\b\k\8\z\0\z\3\0\x\2\w\h\v\p\y\8\4\s\s\l\3\j\n\7\i\7\y\m\q\s\d\m\2\6\w\0\4\7\p\f\1\i\l\0\3\3\n\w\b\s\2\w\i\z\a\2\o\b\q\l\i\5\q\a\p\7\k\6\7\w\j\1\a\b\4\w\2\m\l\z\d\a\7\f\z\w\l\e\h\u\q\s\k\w\e\r\9\i\6\2\q\j\p\m\i\h\t\q\3\f\f\i\a\c\9\t\p\j\l\r\q\j\b\h\l\e\p\h\x\9\f\5\t\g\g\6\o\k\u\o\c\j\g\m\5\8\e\2\h\v\u\a\n\5\0\3\1\o\p\1\t\h\r\7\4\w\z\x\n\g\y\e\y\t\f\u\3\6\h\b\o\3\c\s\s\q\d\w\k\i\o ]] 00:08:46.237 03:58:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:46.237 03:58:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:46.496 [2024-12-09 03:58:28.214871] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:46.496 [2024-12-09 03:58:28.215013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60856 ] 00:08:46.496 [2024-12-09 03:58:28.361912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.754 [2024-12-09 03:58:28.447379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.754 [2024-12-09 03:58:28.531322] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.754  [2024-12-09T03:58:28.961Z] Copying: 512/512 [B] (average 500 kBps) 00:08:47.011 00:08:47.011 03:58:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 17xysvjcmrugty3qn7otz1l4wbzhm22ai1uhxnt0951ndezns7apzzo1cky1c4sduh6vgef4mv6zgfbjg5elr632siss8693uaj6beu8bn2lgviytjva30sh5fj0njivhyaualnrli7hsjg2tzkzdzjyirli4rv1qsalo3najxm3r92zu63w4b1yk2x1jy1m8ungzu8zft2updp4nipgo1pwz06u1uakmfadpbvzi75sorhe3glebvjuj2yrbpzok5ncj91atdlg3naoz2wz42bp0rvejq38o54ocpeyhct21fuzyetj43e315k1qdcsubk8z0z30x2whvpy84ssl3jn7i7ymqsdm26w047pf1il033nwbs2wiza2obqli5qap7k67wj1ab4w2mlzda7fzwlehuqskwer9i62qjpmihtq3ffiac9tpjlrqjbhlephx9f5tgg6okuocjgm58e2hvuan5031op1thr74wzxngyeytfu36hbo3cssqdwkio == 
\1\7\x\y\s\v\j\c\m\r\u\g\t\y\3\q\n\7\o\t\z\1\l\4\w\b\z\h\m\2\2\a\i\1\u\h\x\n\t\0\9\5\1\n\d\e\z\n\s\7\a\p\z\z\o\1\c\k\y\1\c\4\s\d\u\h\6\v\g\e\f\4\m\v\6\z\g\f\b\j\g\5\e\l\r\6\3\2\s\i\s\s\8\6\9\3\u\a\j\6\b\e\u\8\b\n\2\l\g\v\i\y\t\j\v\a\3\0\s\h\5\f\j\0\n\j\i\v\h\y\a\u\a\l\n\r\l\i\7\h\s\j\g\2\t\z\k\z\d\z\j\y\i\r\l\i\4\r\v\1\q\s\a\l\o\3\n\a\j\x\m\3\r\9\2\z\u\6\3\w\4\b\1\y\k\2\x\1\j\y\1\m\8\u\n\g\z\u\8\z\f\t\2\u\p\d\p\4\n\i\p\g\o\1\p\w\z\0\6\u\1\u\a\k\m\f\a\d\p\b\v\z\i\7\5\s\o\r\h\e\3\g\l\e\b\v\j\u\j\2\y\r\b\p\z\o\k\5\n\c\j\9\1\a\t\d\l\g\3\n\a\o\z\2\w\z\4\2\b\p\0\r\v\e\j\q\3\8\o\5\4\o\c\p\e\y\h\c\t\2\1\f\u\z\y\e\t\j\4\3\e\3\1\5\k\1\q\d\c\s\u\b\k\8\z\0\z\3\0\x\2\w\h\v\p\y\8\4\s\s\l\3\j\n\7\i\7\y\m\q\s\d\m\2\6\w\0\4\7\p\f\1\i\l\0\3\3\n\w\b\s\2\w\i\z\a\2\o\b\q\l\i\5\q\a\p\7\k\6\7\w\j\1\a\b\4\w\2\m\l\z\d\a\7\f\z\w\l\e\h\u\q\s\k\w\e\r\9\i\6\2\q\j\p\m\i\h\t\q\3\f\f\i\a\c\9\t\p\j\l\r\q\j\b\h\l\e\p\h\x\9\f\5\t\g\g\6\o\k\u\o\c\j\g\m\5\8\e\2\h\v\u\a\n\5\0\3\1\o\p\1\t\h\r\7\4\w\z\x\n\g\y\e\y\t\f\u\3\6\h\b\o\3\c\s\s\q\d\w\k\i\o ]] 00:08:47.011 03:58:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:47.011 03:58:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:47.011 [2024-12-09 03:58:28.957591] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:47.011 [2024-12-09 03:58:28.957721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60869 ] 00:08:47.270 [2024-12-09 03:58:29.101125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.270 [2024-12-09 03:58:29.181642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.527 [2024-12-09 03:58:29.261215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.527  [2024-12-09T03:58:29.734Z] Copying: 512/512 [B] (average 166 kBps) 00:08:47.784 00:08:47.784 03:58:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 17xysvjcmrugty3qn7otz1l4wbzhm22ai1uhxnt0951ndezns7apzzo1cky1c4sduh6vgef4mv6zgfbjg5elr632siss8693uaj6beu8bn2lgviytjva30sh5fj0njivhyaualnrli7hsjg2tzkzdzjyirli4rv1qsalo3najxm3r92zu63w4b1yk2x1jy1m8ungzu8zft2updp4nipgo1pwz06u1uakmfadpbvzi75sorhe3glebvjuj2yrbpzok5ncj91atdlg3naoz2wz42bp0rvejq38o54ocpeyhct21fuzyetj43e315k1qdcsubk8z0z30x2whvpy84ssl3jn7i7ymqsdm26w047pf1il033nwbs2wiza2obqli5qap7k67wj1ab4w2mlzda7fzwlehuqskwer9i62qjpmihtq3ffiac9tpjlrqjbhlephx9f5tgg6okuocjgm58e2hvuan5031op1thr74wzxngyeytfu36hbo3cssqdwkio == 
\1\7\x\y\s\v\j\c\m\r\u\g\t\y\3\q\n\7\o\t\z\1\l\4\w\b\z\h\m\2\2\a\i\1\u\h\x\n\t\0\9\5\1\n\d\e\z\n\s\7\a\p\z\z\o\1\c\k\y\1\c\4\s\d\u\h\6\v\g\e\f\4\m\v\6\z\g\f\b\j\g\5\e\l\r\6\3\2\s\i\s\s\8\6\9\3\u\a\j\6\b\e\u\8\b\n\2\l\g\v\i\y\t\j\v\a\3\0\s\h\5\f\j\0\n\j\i\v\h\y\a\u\a\l\n\r\l\i\7\h\s\j\g\2\t\z\k\z\d\z\j\y\i\r\l\i\4\r\v\1\q\s\a\l\o\3\n\a\j\x\m\3\r\9\2\z\u\6\3\w\4\b\1\y\k\2\x\1\j\y\1\m\8\u\n\g\z\u\8\z\f\t\2\u\p\d\p\4\n\i\p\g\o\1\p\w\z\0\6\u\1\u\a\k\m\f\a\d\p\b\v\z\i\7\5\s\o\r\h\e\3\g\l\e\b\v\j\u\j\2\y\r\b\p\z\o\k\5\n\c\j\9\1\a\t\d\l\g\3\n\a\o\z\2\w\z\4\2\b\p\0\r\v\e\j\q\3\8\o\5\4\o\c\p\e\y\h\c\t\2\1\f\u\z\y\e\t\j\4\3\e\3\1\5\k\1\q\d\c\s\u\b\k\8\z\0\z\3\0\x\2\w\h\v\p\y\8\4\s\s\l\3\j\n\7\i\7\y\m\q\s\d\m\2\6\w\0\4\7\p\f\1\i\l\0\3\3\n\w\b\s\2\w\i\z\a\2\o\b\q\l\i\5\q\a\p\7\k\6\7\w\j\1\a\b\4\w\2\m\l\z\d\a\7\f\z\w\l\e\h\u\q\s\k\w\e\r\9\i\6\2\q\j\p\m\i\h\t\q\3\f\f\i\a\c\9\t\p\j\l\r\q\j\b\h\l\e\p\h\x\9\f\5\t\g\g\6\o\k\u\o\c\j\g\m\5\8\e\2\h\v\u\a\n\5\0\3\1\o\p\1\t\h\r\7\4\w\z\x\n\g\y\e\y\t\f\u\3\6\h\b\o\3\c\s\s\q\d\w\k\i\o ]] 00:08:47.784 03:58:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:47.784 03:58:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:47.784 [2024-12-09 03:58:29.688288] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:47.784 [2024-12-09 03:58:29.688418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60882 ] 00:08:48.041 [2024-12-09 03:58:29.827858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.041 [2024-12-09 03:58:29.905617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.041 [2024-12-09 03:58:29.980713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:48.297  [2024-12-09T03:58:30.505Z] Copying: 512/512 [B] (average 250 kBps) 00:08:48.555 00:08:48.555 03:58:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 17xysvjcmrugty3qn7otz1l4wbzhm22ai1uhxnt0951ndezns7apzzo1cky1c4sduh6vgef4mv6zgfbjg5elr632siss8693uaj6beu8bn2lgviytjva30sh5fj0njivhyaualnrli7hsjg2tzkzdzjyirli4rv1qsalo3najxm3r92zu63w4b1yk2x1jy1m8ungzu8zft2updp4nipgo1pwz06u1uakmfadpbvzi75sorhe3glebvjuj2yrbpzok5ncj91atdlg3naoz2wz42bp0rvejq38o54ocpeyhct21fuzyetj43e315k1qdcsubk8z0z30x2whvpy84ssl3jn7i7ymqsdm26w047pf1il033nwbs2wiza2obqli5qap7k67wj1ab4w2mlzda7fzwlehuqskwer9i62qjpmihtq3ffiac9tpjlrqjbhlephx9f5tgg6okuocjgm58e2hvuan5031op1thr74wzxngyeytfu36hbo3cssqdwkio == 
\1\7\x\y\s\v\j\c\m\r\u\g\t\y\3\q\n\7\o\t\z\1\l\4\w\b\z\h\m\2\2\a\i\1\u\h\x\n\t\0\9\5\1\n\d\e\z\n\s\7\a\p\z\z\o\1\c\k\y\1\c\4\s\d\u\h\6\v\g\e\f\4\m\v\6\z\g\f\b\j\g\5\e\l\r\6\3\2\s\i\s\s\8\6\9\3\u\a\j\6\b\e\u\8\b\n\2\l\g\v\i\y\t\j\v\a\3\0\s\h\5\f\j\0\n\j\i\v\h\y\a\u\a\l\n\r\l\i\7\h\s\j\g\2\t\z\k\z\d\z\j\y\i\r\l\i\4\r\v\1\q\s\a\l\o\3\n\a\j\x\m\3\r\9\2\z\u\6\3\w\4\b\1\y\k\2\x\1\j\y\1\m\8\u\n\g\z\u\8\z\f\t\2\u\p\d\p\4\n\i\p\g\o\1\p\w\z\0\6\u\1\u\a\k\m\f\a\d\p\b\v\z\i\7\5\s\o\r\h\e\3\g\l\e\b\v\j\u\j\2\y\r\b\p\z\o\k\5\n\c\j\9\1\a\t\d\l\g\3\n\a\o\z\2\w\z\4\2\b\p\0\r\v\e\j\q\3\8\o\5\4\o\c\p\e\y\h\c\t\2\1\f\u\z\y\e\t\j\4\3\e\3\1\5\k\1\q\d\c\s\u\b\k\8\z\0\z\3\0\x\2\w\h\v\p\y\8\4\s\s\l\3\j\n\7\i\7\y\m\q\s\d\m\2\6\w\0\4\7\p\f\1\i\l\0\3\3\n\w\b\s\2\w\i\z\a\2\o\b\q\l\i\5\q\a\p\7\k\6\7\w\j\1\a\b\4\w\2\m\l\z\d\a\7\f\z\w\l\e\h\u\q\s\k\w\e\r\9\i\6\2\q\j\p\m\i\h\t\q\3\f\f\i\a\c\9\t\p\j\l\r\q\j\b\h\l\e\p\h\x\9\f\5\t\g\g\6\o\k\u\o\c\j\g\m\5\8\e\2\h\v\u\a\n\5\0\3\1\o\p\1\t\h\r\7\4\w\z\x\n\g\y\e\y\t\f\u\3\6\h\b\o\3\c\s\s\q\d\w\k\i\o ]] 00:08:48.555 03:58:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:48.555 03:58:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:48.555 03:58:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:48.555 03:58:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:48.555 03:58:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:48.555 03:58:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:48.555 [2024-12-09 03:58:30.405480] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:48.555 [2024-12-09 03:58:30.405928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60884 ] 00:08:48.813 [2024-12-09 03:58:30.555553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.813 [2024-12-09 03:58:30.633802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.813 [2024-12-09 03:58:30.714713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.070  [2024-12-09T03:58:31.279Z] Copying: 512/512 [B] (average 500 kBps) 00:08:49.329 00:08:49.329 03:58:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ akjvmy0b4wwaynka7fubwsd9peqc4j7fy65staqve3ml61ady3ri6ibd3v2y7bg1bur0pxp4dt92twpppd9acvt3cank3tslkeaw5bud98id7iw0jcyypaz9fbj2bz27lepd6zwe460wzhdj3sqmrig2kb9f5cthgjsgyvcwwo8euq1ovhp3gsml6f9hdwsjttntv5d23syrehu8r9qqilv0p8foo0lhrh0fqf2szyg0umt3imzw6xepprkwsozzmp0dfzxy12r42a40ylwvrxyo4bs5w5elt2hj13kydnzw5gnlhjeq3hnihpczbgtocos4j5iukc4yp4hp1bqxt8gofaiwhlzmf1ihlducu8n70viogwctqczbulw25l14wo3f76ju7g6zzcyx92at12c54xj8t2m1c4t936xyy4leb3xz4w4i26sfwurrms0uis7m6rq6qqgrp1276zmj5c76mhiz7tf5kdth37l2vgjd7p6i0fwnrkdvddxj4n48 == \a\k\j\v\m\y\0\b\4\w\w\a\y\n\k\a\7\f\u\b\w\s\d\9\p\e\q\c\4\j\7\f\y\6\5\s\t\a\q\v\e\3\m\l\6\1\a\d\y\3\r\i\6\i\b\d\3\v\2\y\7\b\g\1\b\u\r\0\p\x\p\4\d\t\9\2\t\w\p\p\p\d\9\a\c\v\t\3\c\a\n\k\3\t\s\l\k\e\a\w\5\b\u\d\9\8\i\d\7\i\w\0\j\c\y\y\p\a\z\9\f\b\j\2\b\z\2\7\l\e\p\d\6\z\w\e\4\6\0\w\z\h\d\j\3\s\q\m\r\i\g\2\k\b\9\f\5\c\t\h\g\j\s\g\y\v\c\w\w\o\8\e\u\q\1\o\v\h\p\3\g\s\m\l\6\f\9\h\d\w\s\j\t\t\n\t\v\5\d\2\3\s\y\r\e\h\u\8\r\9\q\q\i\l\v\0\p\8\f\o\o\0\l\h\r\h\0\f\q\f\2\s\z\y\g\0\u\m\t\3\i\m\z\w\6\x\e\p\p\r\k\w\s\o\z\z\m\p\0\d\f\z\x\y\1\2\r\4\2\a\4\0\y\l\w\v\r\x\y\o\4\b\s\5\w\5\e\l\t\2\h\j\1\3\k\y\d\n\z\w\5\g\n\l\h\j\e\q\3\h\n\i\h\p\c\z\b\g\t\o\c\o\s\4\j\5\i\u\k\c\4\y\p\4\h\p\1\b\q\x\t\8\g\o\f\a\i\w\h\l\z\m\f\1\i\h\l\d\u\c\u\8\n\7\0\v\i\o\g\w\c\t\q\c\z\b\u\l\w\2\5\l\1\4\w\o\3\f\7\6\j\u\7\g\6\z\z\c\y\x\9\2\a\t\1\2\c\5\4\x\j\8\t\2\m\1\c\4\t\9\3\6\x\y\y\4\l\e\b\3\x\z\4\w\4\i\2\6\s\f\w\u\r\r\m\s\0\u\i\s\7\m\6\r\q\6\q\q\g\r\p\1\2\7\6\z\m\j\5\c\7\6\m\h\i\z\7\t\f\5\k\d\t\h\3\7\l\2\v\g\j\d\7\p\6\i\0\f\w\n\r\k\d\v\d\d\x\j\4\n\4\8 ]] 00:08:49.329 03:58:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:49.329 03:58:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:49.329 [2024-12-09 03:58:31.108886] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:49.329 [2024-12-09 03:58:31.109001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60900 ] 00:08:49.329 [2024-12-09 03:58:31.254604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.587 [2024-12-09 03:58:31.310487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.587 [2024-12-09 03:58:31.394142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.587  [2024-12-09T03:58:31.796Z] Copying: 512/512 [B] (average 500 kBps) 00:08:49.846 00:08:49.846 03:58:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ akjvmy0b4wwaynka7fubwsd9peqc4j7fy65staqve3ml61ady3ri6ibd3v2y7bg1bur0pxp4dt92twpppd9acvt3cank3tslkeaw5bud98id7iw0jcyypaz9fbj2bz27lepd6zwe460wzhdj3sqmrig2kb9f5cthgjsgyvcwwo8euq1ovhp3gsml6f9hdwsjttntv5d23syrehu8r9qqilv0p8foo0lhrh0fqf2szyg0umt3imzw6xepprkwsozzmp0dfzxy12r42a40ylwvrxyo4bs5w5elt2hj13kydnzw5gnlhjeq3hnihpczbgtocos4j5iukc4yp4hp1bqxt8gofaiwhlzmf1ihlducu8n70viogwctqczbulw25l14wo3f76ju7g6zzcyx92at12c54xj8t2m1c4t936xyy4leb3xz4w4i26sfwurrms0uis7m6rq6qqgrp1276zmj5c76mhiz7tf5kdth37l2vgjd7p6i0fwnrkdvddxj4n48 == \a\k\j\v\m\y\0\b\4\w\w\a\y\n\k\a\7\f\u\b\w\s\d\9\p\e\q\c\4\j\7\f\y\6\5\s\t\a\q\v\e\3\m\l\6\1\a\d\y\3\r\i\6\i\b\d\3\v\2\y\7\b\g\1\b\u\r\0\p\x\p\4\d\t\9\2\t\w\p\p\p\d\9\a\c\v\t\3\c\a\n\k\3\t\s\l\k\e\a\w\5\b\u\d\9\8\i\d\7\i\w\0\j\c\y\y\p\a\z\9\f\b\j\2\b\z\2\7\l\e\p\d\6\z\w\e\4\6\0\w\z\h\d\j\3\s\q\m\r\i\g\2\k\b\9\f\5\c\t\h\g\j\s\g\y\v\c\w\w\o\8\e\u\q\1\o\v\h\p\3\g\s\m\l\6\f\9\h\d\w\s\j\t\t\n\t\v\5\d\2\3\s\y\r\e\h\u\8\r\9\q\q\i\l\v\0\p\8\f\o\o\0\l\h\r\h\0\f\q\f\2\s\z\y\g\0\u\m\t\3\i\m\z\w\6\x\e\p\p\r\k\w\s\o\z\z\m\p\0\d\f\z\x\y\1\2\r\4\2\a\4\0\y\l\w\v\r\x\y\o\4\b\s\5\w\5\e\l\t\2\h\j\1\3\k\y\d\n\z\w\5\g\n\l\h\j\e\q\3\h\n\i\h\p\c\z\b\g\t\o\c\o\s\4\j\5\i\u\k\c\4\y\p\4\h\p\1\b\q\x\t\8\g\o\f\a\i\w\h\l\z\m\f\1\i\h\l\d\u\c\u\8\n\7\0\v\i\o\g\w\c\t\q\c\z\b\u\l\w\2\5\l\1\4\w\o\3\f\7\6\j\u\7\g\6\z\z\c\y\x\9\2\a\t\1\2\c\5\4\x\j\8\t\2\m\1\c\4\t\9\3\6\x\y\y\4\l\e\b\3\x\z\4\w\4\i\2\6\s\f\w\u\r\r\m\s\0\u\i\s\7\m\6\r\q\6\q\q\g\r\p\1\2\7\6\z\m\j\5\c\7\6\m\h\i\z\7\t\f\5\k\d\t\h\3\7\l\2\v\g\j\d\7\p\6\i\0\f\w\n\r\k\d\v\d\d\x\j\4\n\4\8 ]] 00:08:49.846 03:58:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:49.846 03:58:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:50.105 [2024-12-09 03:58:31.836951] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:50.105 [2024-12-09 03:58:31.837098] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60913 ] 00:08:50.105 [2024-12-09 03:58:31.985862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.364 [2024-12-09 03:58:32.069115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.364 [2024-12-09 03:58:32.145680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.364  [2024-12-09T03:58:32.573Z] Copying: 512/512 [B] (average 100 kBps) 00:08:50.623 00:08:50.623 03:58:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ akjvmy0b4wwaynka7fubwsd9peqc4j7fy65staqve3ml61ady3ri6ibd3v2y7bg1bur0pxp4dt92twpppd9acvt3cank3tslkeaw5bud98id7iw0jcyypaz9fbj2bz27lepd6zwe460wzhdj3sqmrig2kb9f5cthgjsgyvcwwo8euq1ovhp3gsml6f9hdwsjttntv5d23syrehu8r9qqilv0p8foo0lhrh0fqf2szyg0umt3imzw6xepprkwsozzmp0dfzxy12r42a40ylwvrxyo4bs5w5elt2hj13kydnzw5gnlhjeq3hnihpczbgtocos4j5iukc4yp4hp1bqxt8gofaiwhlzmf1ihlducu8n70viogwctqczbulw25l14wo3f76ju7g6zzcyx92at12c54xj8t2m1c4t936xyy4leb3xz4w4i26sfwurrms0uis7m6rq6qqgrp1276zmj5c76mhiz7tf5kdth37l2vgjd7p6i0fwnrkdvddxj4n48 == \a\k\j\v\m\y\0\b\4\w\w\a\y\n\k\a\7\f\u\b\w\s\d\9\p\e\q\c\4\j\7\f\y\6\5\s\t\a\q\v\e\3\m\l\6\1\a\d\y\3\r\i\6\i\b\d\3\v\2\y\7\b\g\1\b\u\r\0\p\x\p\4\d\t\9\2\t\w\p\p\p\d\9\a\c\v\t\3\c\a\n\k\3\t\s\l\k\e\a\w\5\b\u\d\9\8\i\d\7\i\w\0\j\c\y\y\p\a\z\9\f\b\j\2\b\z\2\7\l\e\p\d\6\z\w\e\4\6\0\w\z\h\d\j\3\s\q\m\r\i\g\2\k\b\9\f\5\c\t\h\g\j\s\g\y\v\c\w\w\o\8\e\u\q\1\o\v\h\p\3\g\s\m\l\6\f\9\h\d\w\s\j\t\t\n\t\v\5\d\2\3\s\y\r\e\h\u\8\r\9\q\q\i\l\v\0\p\8\f\o\o\0\l\h\r\h\0\f\q\f\2\s\z\y\g\0\u\m\t\3\i\m\z\w\6\x\e\p\p\r\k\w\s\o\z\z\m\p\0\d\f\z\x\y\1\2\r\4\2\a\4\0\y\l\w\v\r\x\y\o\4\b\s\5\w\5\e\l\t\2\h\j\1\3\k\y\d\n\z\w\5\g\n\l\h\j\e\q\3\h\n\i\h\p\c\z\b\g\t\o\c\o\s\4\j\5\i\u\k\c\4\y\p\4\h\p\1\b\q\x\t\8\g\o\f\a\i\w\h\l\z\m\f\1\i\h\l\d\u\c\u\8\n\7\0\v\i\o\g\w\c\t\q\c\z\b\u\l\w\2\5\l\1\4\w\o\3\f\7\6\j\u\7\g\6\z\z\c\y\x\9\2\a\t\1\2\c\5\4\x\j\8\t\2\m\1\c\4\t\9\3\6\x\y\y\4\l\e\b\3\x\z\4\w\4\i\2\6\s\f\w\u\r\r\m\s\0\u\i\s\7\m\6\r\q\6\q\q\g\r\p\1\2\7\6\z\m\j\5\c\7\6\m\h\i\z\7\t\f\5\k\d\t\h\3\7\l\2\v\g\j\d\7\p\6\i\0\f\w\n\r\k\d\v\d\d\x\j\4\n\4\8 ]] 00:08:50.623 03:58:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:50.623 03:58:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:50.623 [2024-12-09 03:58:32.565473] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:08:50.623 [2024-12-09 03:58:32.565956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60919 ] 00:08:50.882 [2024-12-09 03:58:32.721673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.882 [2024-12-09 03:58:32.816772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.140 [2024-12-09 03:58:32.894754] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.140  [2024-12-09T03:58:33.351Z] Copying: 512/512 [B] (average 250 kBps) 00:08:51.401 00:08:51.401 03:58:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ akjvmy0b4wwaynka7fubwsd9peqc4j7fy65staqve3ml61ady3ri6ibd3v2y7bg1bur0pxp4dt92twpppd9acvt3cank3tslkeaw5bud98id7iw0jcyypaz9fbj2bz27lepd6zwe460wzhdj3sqmrig2kb9f5cthgjsgyvcwwo8euq1ovhp3gsml6f9hdwsjttntv5d23syrehu8r9qqilv0p8foo0lhrh0fqf2szyg0umt3imzw6xepprkwsozzmp0dfzxy12r42a40ylwvrxyo4bs5w5elt2hj13kydnzw5gnlhjeq3hnihpczbgtocos4j5iukc4yp4hp1bqxt8gofaiwhlzmf1ihlducu8n70viogwctqczbulw25l14wo3f76ju7g6zzcyx92at12c54xj8t2m1c4t936xyy4leb3xz4w4i26sfwurrms0uis7m6rq6qqgrp1276zmj5c76mhiz7tf5kdth37l2vgjd7p6i0fwnrkdvddxj4n48 == \a\k\j\v\m\y\0\b\4\w\w\a\y\n\k\a\7\f\u\b\w\s\d\9\p\e\q\c\4\j\7\f\y\6\5\s\t\a\q\v\e\3\m\l\6\1\a\d\y\3\r\i\6\i\b\d\3\v\2\y\7\b\g\1\b\u\r\0\p\x\p\4\d\t\9\2\t\w\p\p\p\d\9\a\c\v\t\3\c\a\n\k\3\t\s\l\k\e\a\w\5\b\u\d\9\8\i\d\7\i\w\0\j\c\y\y\p\a\z\9\f\b\j\2\b\z\2\7\l\e\p\d\6\z\w\e\4\6\0\w\z\h\d\j\3\s\q\m\r\i\g\2\k\b\9\f\5\c\t\h\g\j\s\g\y\v\c\w\w\o\8\e\u\q\1\o\v\h\p\3\g\s\m\l\6\f\9\h\d\w\s\j\t\t\n\t\v\5\d\2\3\s\y\r\e\h\u\8\r\9\q\q\i\l\v\0\p\8\f\o\o\0\l\h\r\h\0\f\q\f\2\s\z\y\g\0\u\m\t\3\i\m\z\w\6\x\e\p\p\r\k\w\s\o\z\z\m\p\0\d\f\z\x\y\1\2\r\4\2\a\4\0\y\l\w\v\r\x\y\o\4\b\s\5\w\5\e\l\t\2\h\j\1\3\k\y\d\n\z\w\5\g\n\l\h\j\e\q\3\h\n\i\h\p\c\z\b\g\t\o\c\o\s\4\j\5\i\u\k\c\4\y\p\4\h\p\1\b\q\x\t\8\g\o\f\a\i\w\h\l\z\m\f\1\i\h\l\d\u\c\u\8\n\7\0\v\i\o\g\w\c\t\q\c\z\b\u\l\w\2\5\l\1\4\w\o\3\f\7\6\j\u\7\g\6\z\z\c\y\x\9\2\a\t\1\2\c\5\4\x\j\8\t\2\m\1\c\4\t\9\3\6\x\y\y\4\l\e\b\3\x\z\4\w\4\i\2\6\s\f\w\u\r\r\m\s\0\u\i\s\7\m\6\r\q\6\q\q\g\r\p\1\2\7\6\z\m\j\5\c\7\6\m\h\i\z\7\t\f\5\k\d\t\h\3\7\l\2\v\g\j\d\7\p\6\i\0\f\w\n\r\k\d\v\d\d\x\j\4\n\4\8 ]] 00:08:51.401 00:08:51.401 real 0m5.838s 00:08:51.401 user 0m3.327s 00:08:51.401 sys 0m1.518s 00:08:51.401 ************************************ 00:08:51.401 END TEST dd_flags_misc_forced_aio 00:08:51.401 ************************************ 00:08:51.401 03:58:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.401 03:58:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:51.401 03:58:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:51.401 03:58:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:51.401 03:58:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:51.401 ************************************ 00:08:51.401 END TEST spdk_dd_posix 00:08:51.401 ************************************ 00:08:51.401 00:08:51.401 real 0m23.243s 00:08:51.401 user 0m11.717s 00:08:51.401 sys 0m7.520s 00:08:51.401 03:58:33 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.401 03:58:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:51.661 03:58:33 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:51.661 03:58:33 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.661 03:58:33 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.661 03:58:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:51.661 ************************************ 00:08:51.661 START TEST spdk_dd_malloc 00:08:51.661 ************************************ 00:08:51.661 03:58:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:51.661 * Looking for test storage... 00:08:51.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:51.661 03:58:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:51.661 03:58:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:51.661 03:58:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:51.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.920 --rc genhtml_branch_coverage=1 00:08:51.920 --rc genhtml_function_coverage=1 00:08:51.920 --rc genhtml_legend=1 00:08:51.920 --rc geninfo_all_blocks=1 00:08:51.920 --rc geninfo_unexecuted_blocks=1 00:08:51.920 00:08:51.920 ' 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:51.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.920 --rc genhtml_branch_coverage=1 00:08:51.920 --rc genhtml_function_coverage=1 00:08:51.920 --rc genhtml_legend=1 00:08:51.920 --rc geninfo_all_blocks=1 00:08:51.920 --rc geninfo_unexecuted_blocks=1 00:08:51.920 00:08:51.920 ' 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:51.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.920 --rc genhtml_branch_coverage=1 00:08:51.920 --rc genhtml_function_coverage=1 00:08:51.920 --rc genhtml_legend=1 00:08:51.920 --rc geninfo_all_blocks=1 00:08:51.920 --rc geninfo_unexecuted_blocks=1 00:08:51.920 00:08:51.920 ' 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:51.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.920 --rc genhtml_branch_coverage=1 00:08:51.920 --rc genhtml_function_coverage=1 00:08:51.920 --rc genhtml_legend=1 00:08:51.920 --rc geninfo_all_blocks=1 00:08:51.920 --rc geninfo_unexecuted_blocks=1 00:08:51.920 00:08:51.920 ' 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.920 03:58:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.921 03:58:33 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:51.921 ************************************ 00:08:51.921 START TEST dd_malloc_copy 00:08:51.921 ************************************ 00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:51.921 03:58:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:51.921 [2024-12-09 03:58:33.734489] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:51.921 [2024-12-09 03:58:33.734881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61008 ] 00:08:51.921 { 00:08:51.921 "subsystems": [ 00:08:51.921 { 00:08:51.921 "subsystem": "bdev", 00:08:51.921 "config": [ 00:08:51.921 { 00:08:51.921 "params": { 00:08:51.921 "block_size": 512, 00:08:51.921 "num_blocks": 1048576, 00:08:51.921 "name": "malloc0" 00:08:51.921 }, 00:08:51.921 "method": "bdev_malloc_create" 00:08:51.921 }, 00:08:51.921 { 00:08:51.921 "params": { 00:08:51.921 "block_size": 512, 00:08:51.921 "num_blocks": 1048576, 00:08:51.921 "name": "malloc1" 00:08:51.921 }, 00:08:51.921 "method": "bdev_malloc_create" 00:08:51.921 }, 00:08:51.921 { 00:08:51.921 "method": "bdev_wait_for_examine" 00:08:51.921 } 00:08:51.921 ] 00:08:51.921 } 00:08:51.921 ] 00:08:51.921 } 00:08:52.213 [2024-12-09 03:58:33.879508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.213 [2024-12-09 03:58:33.958059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.213 [2024-12-09 03:58:34.037396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.606  [2024-12-09T03:58:36.928Z] Copying: 179/512 [MB] (179 MBps) [2024-12-09T03:58:37.493Z] Copying: 360/512 [MB] (180 MBps) [2024-12-09T03:58:38.427Z] Copying: 512/512 [MB] (average 183 MBps) 00:08:56.477 00:08:56.477 03:58:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:56.477 03:58:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:56.477 03:58:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:56.477 03:58:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:56.477 { 00:08:56.477 "subsystems": [ 00:08:56.477 { 00:08:56.477 "subsystem": "bdev", 00:08:56.477 "config": [ 00:08:56.477 { 00:08:56.477 "params": { 00:08:56.477 "block_size": 512, 00:08:56.477 "num_blocks": 1048576, 00:08:56.477 "name": "malloc0" 00:08:56.477 }, 00:08:56.477 "method": "bdev_malloc_create" 00:08:56.477 }, 00:08:56.477 { 00:08:56.477 "params": { 00:08:56.477 "block_size": 512, 00:08:56.477 "num_blocks": 1048576, 00:08:56.477 "name": "malloc1" 00:08:56.477 }, 00:08:56.477 "method": 
"bdev_malloc_create" 00:08:56.477 }, 00:08:56.477 { 00:08:56.477 "method": "bdev_wait_for_examine" 00:08:56.477 } 00:08:56.477 ] 00:08:56.477 } 00:08:56.477 ] 00:08:56.477 } 00:08:56.477 [2024-12-09 03:58:38.217834] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:08:56.477 [2024-12-09 03:58:38.217969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61061 ] 00:08:56.477 [2024-12-09 03:58:38.370787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.735 [2024-12-09 03:58:38.460761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.735 [2024-12-09 03:58:38.541622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:58.108  [2024-12-09T03:58:41.434Z] Copying: 196/512 [MB] (196 MBps) [2024-12-09T03:58:42.002Z] Copying: 390/512 [MB] (193 MBps) [2024-12-09T03:58:42.941Z] Copying: 512/512 [MB] (average 191 MBps) 00:09:00.991 00:09:00.991 00:09:00.991 real 0m8.918s 00:09:00.991 user 0m7.556s 00:09:00.991 sys 0m1.179s 00:09:00.991 03:58:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.991 03:58:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:00.991 ************************************ 00:09:00.991 END TEST dd_malloc_copy 00:09:00.991 ************************************ 00:09:00.991 ************************************ 00:09:00.991 END TEST spdk_dd_malloc 00:09:00.991 ************************************ 00:09:00.991 00:09:00.991 real 0m9.245s 00:09:00.991 user 0m7.754s 00:09:00.991 sys 0m1.304s 00:09:00.991 03:58:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.991 03:58:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:00.991 03:58:42 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:00.991 03:58:42 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:00.991 03:58:42 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.991 03:58:42 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:00.991 ************************************ 00:09:00.991 START TEST spdk_dd_bdev_to_bdev 00:09:00.991 ************************************ 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:00.991 * Looking for test storage... 
00:09:00.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:00.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.991 --rc genhtml_branch_coverage=1 00:09:00.991 --rc genhtml_function_coverage=1 00:09:00.991 --rc genhtml_legend=1 00:09:00.991 --rc geninfo_all_blocks=1 00:09:00.991 --rc geninfo_unexecuted_blocks=1 00:09:00.991 00:09:00.991 ' 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:00.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.991 --rc genhtml_branch_coverage=1 00:09:00.991 --rc genhtml_function_coverage=1 00:09:00.991 --rc genhtml_legend=1 00:09:00.991 --rc geninfo_all_blocks=1 00:09:00.991 --rc geninfo_unexecuted_blocks=1 00:09:00.991 00:09:00.991 ' 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:00.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.991 --rc genhtml_branch_coverage=1 00:09:00.991 --rc genhtml_function_coverage=1 00:09:00.991 --rc genhtml_legend=1 00:09:00.991 --rc geninfo_all_blocks=1 00:09:00.991 --rc geninfo_unexecuted_blocks=1 00:09:00.991 00:09:00.991 ' 00:09:00.991 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:00.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.992 --rc genhtml_branch_coverage=1 00:09:00.992 --rc genhtml_function_coverage=1 00:09:00.992 --rc genhtml_legend=1 00:09:00.992 --rc geninfo_all_blocks=1 00:09:00.992 --rc geninfo_unexecuted_blocks=1 00:09:00.992 00:09:00.992 ' 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.992 03:58:42 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:00.992 ************************************ 00:09:00.992 START TEST dd_inflate_file 00:09:00.992 ************************************ 00:09:00.992 03:58:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:01.251 [2024-12-09 03:58:42.946406] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:09:01.251 [2024-12-09 03:58:42.946690] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61185 ] 00:09:01.251 [2024-12-09 03:58:43.089768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.251 [2024-12-09 03:58:43.171339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.510 [2024-12-09 03:58:43.252875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:01.510  [2024-12-09T03:58:43.719Z] Copying: 64/64 [MB] (average 1306 MBps) 00:09:01.769 00:09:01.769 00:09:01.769 real 0m0.731s 00:09:01.770 user 0m0.437s 00:09:01.770 sys 0m0.408s 00:09:01.770 03:58:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.770 ************************************ 00:09:01.770 END TEST dd_inflate_file 00:09:01.770 ************************************ 00:09:01.770 03:58:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:09:01.770 03:58:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:09:01.770 03:58:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:09:01.770 03:58:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:01.770 03:58:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:09:01.770 03:58:43 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:01.770 03:58:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:01.770 03:58:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.770 03:58:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:01.770 03:58:43 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:01.770 ************************************ 00:09:01.770 START TEST dd_copy_to_out_bdev 00:09:01.770 ************************************ 00:09:01.770 03:58:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:02.028 { 00:09:02.028 "subsystems": [ 00:09:02.028 { 00:09:02.028 "subsystem": "bdev", 00:09:02.028 "config": [ 00:09:02.028 { 00:09:02.028 "params": { 00:09:02.028 "trtype": "pcie", 00:09:02.028 "traddr": "0000:00:10.0", 00:09:02.028 "name": "Nvme0" 00:09:02.028 }, 00:09:02.028 "method": "bdev_nvme_attach_controller" 00:09:02.028 }, 00:09:02.028 { 00:09:02.028 "params": { 00:09:02.028 "trtype": "pcie", 00:09:02.028 "traddr": "0000:00:11.0", 00:09:02.028 "name": "Nvme1" 00:09:02.028 }, 00:09:02.028 "method": "bdev_nvme_attach_controller" 00:09:02.028 }, 00:09:02.028 { 00:09:02.028 "method": "bdev_wait_for_examine" 00:09:02.028 } 00:09:02.028 ] 00:09:02.028 } 00:09:02.028 ] 00:09:02.028 } 00:09:02.029 [2024-12-09 03:58:43.751206] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:09:02.029 [2024-12-09 03:58:43.751318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61218 ] 00:09:02.029 [2024-12-09 03:58:43.897169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.029 [2024-12-09 03:58:43.975323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.288 [2024-12-09 03:58:44.052652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:03.668  [2024-12-09T03:58:45.618Z] Copying: 54/64 [MB] (54 MBps) [2024-12-09T03:58:45.876Z] Copying: 64/64 [MB] (average 55 MBps) 00:09:03.926 00:09:03.926 00:09:03.926 real 0m2.049s 00:09:03.926 user 0m1.767s 00:09:03.926 sys 0m1.631s 00:09:03.926 03:58:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.926 03:58:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:03.926 ************************************ 00:09:03.926 END TEST dd_copy_to_out_bdev 00:09:03.926 ************************************ 00:09:03.926 03:58:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:09:03.926 03:58:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:09:03.926 03:58:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:03.926 03:58:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.926 03:58:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:03.926 ************************************ 00:09:03.926 START TEST dd_offset_magic 00:09:03.926 ************************************ 00:09:03.926 03:58:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:09:03.926 03:58:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:09:03.926 03:58:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:09:03.926 03:58:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:09:03.926 03:58:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:03.926 03:58:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:09:03.926 03:58:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:03.926 03:58:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:03.926 03:58:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:03.926 [2024-12-09 03:58:45.863387] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:09:03.926 [2024-12-09 03:58:45.863749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61263 ] 00:09:03.926 { 00:09:03.926 "subsystems": [ 00:09:03.926 { 00:09:03.926 "subsystem": "bdev", 00:09:03.926 "config": [ 00:09:03.926 { 00:09:03.926 "params": { 00:09:03.926 "trtype": "pcie", 00:09:03.926 "traddr": "0000:00:10.0", 00:09:03.926 "name": "Nvme0" 00:09:03.926 }, 00:09:03.926 "method": "bdev_nvme_attach_controller" 00:09:03.926 }, 00:09:03.926 { 00:09:03.926 "params": { 00:09:03.926 "trtype": "pcie", 00:09:03.926 "traddr": "0000:00:11.0", 00:09:03.926 "name": "Nvme1" 00:09:03.926 }, 00:09:03.926 "method": "bdev_nvme_attach_controller" 00:09:03.926 }, 00:09:03.926 { 00:09:03.926 "method": "bdev_wait_for_examine" 00:09:03.926 } 00:09:03.926 ] 00:09:03.926 } 00:09:03.926 ] 00:09:03.926 } 00:09:04.183 [2024-12-09 03:58:46.014597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.183 [2024-12-09 03:58:46.085253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.441 [2024-12-09 03:58:46.167645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.699  [2024-12-09T03:58:46.907Z] Copying: 65/65 [MB] (average 942 MBps) 00:09:04.957 00:09:04.957 03:58:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:09:04.957 03:58:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:04.957 03:58:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:04.957 03:58:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:04.957 [2024-12-09 03:58:46.821403] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:09:04.957 [2024-12-09 03:58:46.821505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61283 ] 00:09:04.957 { 00:09:04.957 "subsystems": [ 00:09:04.957 { 00:09:04.957 "subsystem": "bdev", 00:09:04.957 "config": [ 00:09:04.957 { 00:09:04.957 "params": { 00:09:04.957 "trtype": "pcie", 00:09:04.957 "traddr": "0000:00:10.0", 00:09:04.957 "name": "Nvme0" 00:09:04.957 }, 00:09:04.957 "method": "bdev_nvme_attach_controller" 00:09:04.957 }, 00:09:04.957 { 00:09:04.957 "params": { 00:09:04.957 "trtype": "pcie", 00:09:04.957 "traddr": "0000:00:11.0", 00:09:04.957 "name": "Nvme1" 00:09:04.957 }, 00:09:04.957 "method": "bdev_nvme_attach_controller" 00:09:04.957 }, 00:09:04.957 { 00:09:04.957 "method": "bdev_wait_for_examine" 00:09:04.957 } 00:09:04.957 ] 00:09:04.957 } 00:09:04.957 ] 00:09:04.957 } 00:09:05.215 [2024-12-09 03:58:46.968912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.215 [2024-12-09 03:58:47.044310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.215 [2024-12-09 03:58:47.125663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:05.485  [2024-12-09T03:58:47.730Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:05.780 00:09:05.780 03:58:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:05.780 03:58:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:05.780 03:58:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:05.780 03:58:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:09:05.780 03:58:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:05.780 03:58:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:05.780 03:58:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:05.780 { 00:09:05.780 "subsystems": [ 00:09:05.780 { 00:09:05.780 "subsystem": "bdev", 00:09:05.780 "config": [ 00:09:05.780 { 00:09:05.780 "params": { 00:09:05.780 "trtype": "pcie", 00:09:05.780 "traddr": "0000:00:10.0", 00:09:05.780 "name": "Nvme0" 00:09:05.780 }, 00:09:05.780 "method": "bdev_nvme_attach_controller" 00:09:05.780 }, 00:09:05.780 { 00:09:05.780 "params": { 00:09:05.780 "trtype": "pcie", 00:09:05.780 "traddr": "0000:00:11.0", 00:09:05.780 "name": "Nvme1" 00:09:05.780 }, 00:09:05.780 "method": "bdev_nvme_attach_controller" 00:09:05.780 }, 00:09:05.780 { 00:09:05.780 "method": "bdev_wait_for_examine" 00:09:05.780 } 00:09:05.780 ] 00:09:05.780 } 00:09:05.780 ] 00:09:05.780 } 00:09:05.780 [2024-12-09 03:58:47.682260] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:09:05.780 [2024-12-09 03:58:47.683288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61305 ] 00:09:06.037 [2024-12-09 03:58:47.830743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.037 [2024-12-09 03:58:47.893552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.037 [2024-12-09 03:58:47.969737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:06.294  [2024-12-09T03:58:48.816Z] Copying: 65/65 [MB] (average 1031 MBps) 00:09:06.866 00:09:06.866 03:58:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:09:06.866 03:58:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:06.866 03:58:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:06.866 03:58:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:06.866 [2024-12-09 03:58:48.607988] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:09:06.866 [2024-12-09 03:58:48.608309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61325 ] 00:09:06.866 { 00:09:06.866 "subsystems": [ 00:09:06.866 { 00:09:06.866 "subsystem": "bdev", 00:09:06.866 "config": [ 00:09:06.866 { 00:09:06.866 "params": { 00:09:06.866 "trtype": "pcie", 00:09:06.866 "traddr": "0000:00:10.0", 00:09:06.866 "name": "Nvme0" 00:09:06.866 }, 00:09:06.866 "method": "bdev_nvme_attach_controller" 00:09:06.866 }, 00:09:06.866 { 00:09:06.866 "params": { 00:09:06.866 "trtype": "pcie", 00:09:06.866 "traddr": "0000:00:11.0", 00:09:06.866 "name": "Nvme1" 00:09:06.866 }, 00:09:06.866 "method": "bdev_nvme_attach_controller" 00:09:06.866 }, 00:09:06.866 { 00:09:06.866 "method": "bdev_wait_for_examine" 00:09:06.866 } 00:09:06.866 ] 00:09:06.866 } 00:09:06.866 ] 00:09:06.866 } 00:09:06.866 [2024-12-09 03:58:48.756365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.124 [2024-12-09 03:58:48.825904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.124 [2024-12-09 03:58:48.907052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:07.381  [2024-12-09T03:58:49.588Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:09:07.638 00:09:07.638 03:58:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:07.638 03:58:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:07.638 00:09:07.638 real 0m3.606s 00:09:07.638 user 0m2.574s 00:09:07.638 sys 0m1.223s 00:09:07.638 ************************************ 00:09:07.638 END TEST dd_offset_magic 00:09:07.638 ************************************ 00:09:07.638 03:58:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 
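The dd_offset_magic pass completing here plants the 26-byte string "This Is Our Magic, find it" in one NVMe bdev, copies a window across to the second bdev at each offset under test, dumps the target region back to a file and compares. A condensed sketch of that round trip, reusing the flags visible in the xtrace above (bdev names, the offset and the dump path are illustrative, not the verbatim bdev_to_bdev.sh):

  magic='This Is Our Magic, find it'                     # 26 bytes, matching read -rn26
  # copy a 65 MiB window from the source bdev into the destination at the offset under test
  spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json <(gen_conf)
  # dump 1 MiB back out of the destination at the same offset
  spdk_dd --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip=64 --bs=1048576 --json <(gen_conf)
  # the magic string must be the first thing in the dump
  read -rn26 magic_check < dd.dump1
  [[ $magic_check == "$magic" ]]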
00:09:07.638 03:58:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:07.638 03:58:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:09:07.638 03:58:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:09:07.638 03:58:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:07.638 03:58:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:07.638 03:58:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:07.638 03:58:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:07.638 03:58:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:07.638 03:58:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:09:07.638 03:58:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:07.638 03:58:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:07.639 03:58:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:07.639 [2024-12-09 03:58:49.507467] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:09:07.639 [2024-12-09 03:58:49.507567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61362 ] 00:09:07.639 { 00:09:07.639 "subsystems": [ 00:09:07.639 { 00:09:07.639 "subsystem": "bdev", 00:09:07.639 "config": [ 00:09:07.639 { 00:09:07.639 "params": { 00:09:07.639 "trtype": "pcie", 00:09:07.639 "traddr": "0000:00:10.0", 00:09:07.639 "name": "Nvme0" 00:09:07.639 }, 00:09:07.639 "method": "bdev_nvme_attach_controller" 00:09:07.639 }, 00:09:07.639 { 00:09:07.639 "params": { 00:09:07.639 "trtype": "pcie", 00:09:07.639 "traddr": "0000:00:11.0", 00:09:07.639 "name": "Nvme1" 00:09:07.639 }, 00:09:07.639 "method": "bdev_nvme_attach_controller" 00:09:07.639 }, 00:09:07.639 { 00:09:07.639 "method": "bdev_wait_for_examine" 00:09:07.639 } 00:09:07.639 ] 00:09:07.639 } 00:09:07.639 ] 00:09:07.639 } 00:09:07.897 [2024-12-09 03:58:49.656423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.897 [2024-12-09 03:58:49.730682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.897 [2024-12-09 03:58:49.808298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:08.154  [2024-12-09T03:58:50.362Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:09:08.412 00:09:08.412 03:58:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:09:08.412 03:58:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:09:08.412 03:58:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:08.412 03:58:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:08.412 03:58:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:08.412 03:58:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:08.412 03:58:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json 
/dev/fd/62 00:09:08.412 03:58:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:08.412 03:58:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:08.412 03:58:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:08.412 { 00:09:08.412 "subsystems": [ 00:09:08.412 { 00:09:08.412 "subsystem": "bdev", 00:09:08.412 "config": [ 00:09:08.412 { 00:09:08.412 "params": { 00:09:08.412 "trtype": "pcie", 00:09:08.412 "traddr": "0000:00:10.0", 00:09:08.412 "name": "Nvme0" 00:09:08.412 }, 00:09:08.412 "method": "bdev_nvme_attach_controller" 00:09:08.412 }, 00:09:08.412 { 00:09:08.412 "params": { 00:09:08.412 "trtype": "pcie", 00:09:08.412 "traddr": "0000:00:11.0", 00:09:08.412 "name": "Nvme1" 00:09:08.413 }, 00:09:08.413 "method": "bdev_nvme_attach_controller" 00:09:08.413 }, 00:09:08.413 { 00:09:08.413 "method": "bdev_wait_for_examine" 00:09:08.413 } 00:09:08.413 ] 00:09:08.413 } 00:09:08.413 ] 00:09:08.413 } 00:09:08.413 [2024-12-09 03:58:50.348476] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:09:08.413 [2024-12-09 03:58:50.348601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61378 ] 00:09:08.671 [2024-12-09 03:58:50.496707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.671 [2024-12-09 03:58:50.557970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.930 [2024-12-09 03:58:50.636806] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:08.930  [2024-12-09T03:58:51.138Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:09:09.188 00:09:09.188 03:58:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:09:09.188 00:09:09.188 real 0m8.431s 00:09:09.188 user 0m6.110s 00:09:09.188 sys 0m4.150s 00:09:09.188 03:58:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.188 03:58:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:09.188 ************************************ 00:09:09.188 END TEST spdk_dd_bdev_to_bdev 00:09:09.188 ************************************ 00:09:09.448 03:58:51 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:09:09.448 03:58:51 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:09.448 03:58:51 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.448 03:58:51 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.448 03:58:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:09.448 ************************************ 00:09:09.448 START TEST spdk_dd_uring 00:09:09.448 ************************************ 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:09.448 * Looking for test storage... 
00:09:09.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.448 03:58:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:09.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.448 --rc genhtml_branch_coverage=1 00:09:09.448 --rc genhtml_function_coverage=1 00:09:09.448 --rc genhtml_legend=1 00:09:09.448 --rc geninfo_all_blocks=1 00:09:09.449 --rc geninfo_unexecuted_blocks=1 00:09:09.449 00:09:09.449 ' 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:09.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.449 --rc genhtml_branch_coverage=1 00:09:09.449 --rc genhtml_function_coverage=1 00:09:09.449 --rc genhtml_legend=1 00:09:09.449 --rc geninfo_all_blocks=1 00:09:09.449 --rc geninfo_unexecuted_blocks=1 00:09:09.449 00:09:09.449 ' 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:09.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.449 --rc genhtml_branch_coverage=1 00:09:09.449 --rc genhtml_function_coverage=1 00:09:09.449 --rc genhtml_legend=1 00:09:09.449 --rc geninfo_all_blocks=1 00:09:09.449 --rc geninfo_unexecuted_blocks=1 00:09:09.449 00:09:09.449 ' 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:09.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.449 --rc genhtml_branch_coverage=1 00:09:09.449 --rc genhtml_function_coverage=1 00:09:09.449 --rc genhtml_legend=1 00:09:09.449 --rc geninfo_all_blocks=1 00:09:09.449 --rc geninfo_unexecuted_blocks=1 00:09:09.449 00:09:09.449 ' 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:09.449 ************************************ 00:09:09.449 START TEST dd_uring_copy 00:09:09.449 ************************************ 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:09.449 
03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:09:09.449 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:09.709 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=uv7tt3l3hw7ht4dg2rs8hesikabsd5w7hs7itnk6sqyduff7qbsaigoxdfnn95d3f74re0urs88ur0kzh1l1ksoj1dnijk6gya40kpzxp2gq26fc0s5cgzdz2ve1ad5jmn2mlf75tws5dgrhowdxzo3slmgqei87ctgodmuxq7lbe7j4bxsk6seckiquayl9v7l9dex3ji9t6dmo9vl5zu4xk8annwx96veerlmnu63d45tlhjh6fxhgpz46ciwqjt7jydkmtxpxqy8c1tdmlbfk2mzichddinvlpsk64o64oft8kyehlq3mirar7ncgugly69g5k8tvkjpd474n2l234dwxmxekm91u0e1nfsz6tuk4u48l80y7oepq913f64i9dsh3e6n014sf7iftxn3d6l1u5jig1dkaf625iwufsylv6f4kv1igr4uysxjflzbz7pvox2o43fewxx13ooei778jqd2jfrhmdc94rd0ue1blqmwssjpwpgzmxaojpfusc8zmcg4j2bdc4jsnpmtm9um41pikes8ycke4m8adlfjdnujna7uith67aukn58o9eujh301oaeldbobp9i5atucs0k2blzxf54wjcex6zj153j291t5au5ei2nw99ra5jzzpbof2luuemnou0166f3ggsos5lkfqvx5nt7ugjm08ts9far0m0da99wuluqhfcene5z43hmspp03h27ne8i0xbqwdnenups5aq7a65veps9lm20edk6nhx791w2pjyfdrak2493n3aey099tuxy9g8xdo0wxa57ragupwcyirgliemq4fhhn2e8vweq95frhlndr3mlj2twr9qrq7zsbz8desvlhmcmhr1f1nneanbmmzqg72haujw65ed8j6yskg6ewtv6u1ow6u4hlaqlgk1n54af7upkh2sjclxrhel9pbe7c9gmxfpypnbvmu0cyy788crf40h26nxdgocrdijw3waidscvh8bxzul0yzouoeofah659ns416 00:09:09.709 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
uv7tt3l3hw7ht4dg2rs8hesikabsd5w7hs7itnk6sqyduff7qbsaigoxdfnn95d3f74re0urs88ur0kzh1l1ksoj1dnijk6gya40kpzxp2gq26fc0s5cgzdz2ve1ad5jmn2mlf75tws5dgrhowdxzo3slmgqei87ctgodmuxq7lbe7j4bxsk6seckiquayl9v7l9dex3ji9t6dmo9vl5zu4xk8annwx96veerlmnu63d45tlhjh6fxhgpz46ciwqjt7jydkmtxpxqy8c1tdmlbfk2mzichddinvlpsk64o64oft8kyehlq3mirar7ncgugly69g5k8tvkjpd474n2l234dwxmxekm91u0e1nfsz6tuk4u48l80y7oepq913f64i9dsh3e6n014sf7iftxn3d6l1u5jig1dkaf625iwufsylv6f4kv1igr4uysxjflzbz7pvox2o43fewxx13ooei778jqd2jfrhmdc94rd0ue1blqmwssjpwpgzmxaojpfusc8zmcg4j2bdc4jsnpmtm9um41pikes8ycke4m8adlfjdnujna7uith67aukn58o9eujh301oaeldbobp9i5atucs0k2blzxf54wjcex6zj153j291t5au5ei2nw99ra5jzzpbof2luuemnou0166f3ggsos5lkfqvx5nt7ugjm08ts9far0m0da99wuluqhfcene5z43hmspp03h27ne8i0xbqwdnenups5aq7a65veps9lm20edk6nhx791w2pjyfdrak2493n3aey099tuxy9g8xdo0wxa57ragupwcyirgliemq4fhhn2e8vweq95frhlndr3mlj2twr9qrq7zsbz8desvlhmcmhr1f1nneanbmmzqg72haujw65ed8j6yskg6ewtv6u1ow6u4hlaqlgk1n54af7upkh2sjclxrhel9pbe7c9gmxfpypnbvmu0cyy788crf40h26nxdgocrdijw3waidscvh8bxzul0yzouoeofah659ns416 00:09:09.709 03:58:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:09:09.709 [2024-12-09 03:58:51.454330] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:09:09.709 [2024-12-09 03:58:51.454440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61456 ] 00:09:09.709 [2024-12-09 03:58:51.595250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.709 [2024-12-09 03:58:51.648533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.968 [2024-12-09 03:58:51.722988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.551  [2024-12-09T03:58:53.066Z] Copying: 511/511 [MB] (average 1199 MBps) 00:09:11.116 00:09:11.116 03:58:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:09:11.116 03:58:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:09:11.116 03:58:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:11.116 03:58:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:11.116 { 00:09:11.116 "subsystems": [ 00:09:11.116 { 00:09:11.116 "subsystem": "bdev", 00:09:11.116 "config": [ 00:09:11.116 { 00:09:11.116 "params": { 00:09:11.116 "block_size": 512, 00:09:11.116 "num_blocks": 1048576, 00:09:11.116 "name": "malloc0" 00:09:11.116 }, 00:09:11.116 "method": "bdev_malloc_create" 00:09:11.116 }, 00:09:11.116 { 00:09:11.116 "params": { 00:09:11.116 "filename": "/dev/zram1", 00:09:11.117 "name": "uring0" 00:09:11.117 }, 00:09:11.117 "method": "bdev_uring_create" 00:09:11.117 }, 00:09:11.117 { 00:09:11.117 "method": "bdev_wait_for_examine" 00:09:11.117 } 00:09:11.117 ] 00:09:11.117 } 00:09:11.117 ] 00:09:11.117 } 00:09:11.375 [2024-12-09 03:58:53.067749] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
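The uring copy above runs against a zram-backed io_uring bdev: a zram device is hot-added and sized to 512M, exposed to spdk_dd as bdev uring0, with a 512 MiB malloc0 bdev (1048576 blocks of 512 bytes) as the in-memory counterpart. A rough sketch of that setup and the round trip, using the sysfs knobs and bdev names seen in the xtrace (a simplified illustration, not the verbatim uring.sh helpers):

  # hot-add a zram device and size it (mirrors init_zram/create_zram_dev/set_zram_dev)
  dev_id=$(cat /sys/class/zram-control/hot_add)
  echo 512M > "/sys/block/zram${dev_id}/disksize"
  # gen_conf then describes two bdevs for spdk_dd:
  #   bdev_uring_create  filename=/dev/zram1 name=uring0
  #   bdev_malloc_create name=malloc0 num_blocks=1048576 block_size=512
  spdk_dd --if=magic.dump0 --ob=uring0      --json <(gen_conf)   # file  -> uring bdev
  spdk_dd --ib=uring0      --of=magic.dump1 --json <(gen_conf)   # uring bdev -> file
  diff -q magic.dump0 magic.dump1                                # the magic payload must round-trip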
00:09:11.375 [2024-12-09 03:58:53.067909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61477 ] 00:09:11.375 [2024-12-09 03:58:53.218997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.375 [2024-12-09 03:58:53.281989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.633 [2024-12-09 03:58:53.358234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.006  [2024-12-09T03:58:55.887Z] Copying: 230/512 [MB] (230 MBps) [2024-12-09T03:58:55.887Z] Copying: 460/512 [MB] (229 MBps) [2024-12-09T03:58:56.453Z] Copying: 512/512 [MB] (average 230 MBps) 00:09:14.503 00:09:14.503 03:58:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:09:14.503 03:58:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:09:14.503 03:58:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:14.503 03:58:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:14.503 [2024-12-09 03:58:56.445894] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:09:14.503 [2024-12-09 03:58:56.446008] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61527 ] 00:09:14.503 { 00:09:14.503 "subsystems": [ 00:09:14.503 { 00:09:14.503 "subsystem": "bdev", 00:09:14.503 "config": [ 00:09:14.503 { 00:09:14.503 "params": { 00:09:14.503 "block_size": 512, 00:09:14.503 "num_blocks": 1048576, 00:09:14.503 "name": "malloc0" 00:09:14.503 }, 00:09:14.503 "method": "bdev_malloc_create" 00:09:14.503 }, 00:09:14.503 { 00:09:14.503 "params": { 00:09:14.503 "filename": "/dev/zram1", 00:09:14.503 "name": "uring0" 00:09:14.503 }, 00:09:14.503 "method": "bdev_uring_create" 00:09:14.503 }, 00:09:14.503 { 00:09:14.503 "method": "bdev_wait_for_examine" 00:09:14.503 } 00:09:14.503 ] 00:09:14.503 } 00:09:14.503 ] 00:09:14.503 } 00:09:14.761 [2024-12-09 03:58:56.588595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.761 [2024-12-09 03:58:56.672284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.018 [2024-12-09 03:58:56.750861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:16.414  [2024-12-09T03:58:59.297Z] Copying: 183/512 [MB] (183 MBps) [2024-12-09T03:59:00.231Z] Copying: 353/512 [MB] (169 MBps) [2024-12-09T03:59:00.489Z] Copying: 512/512 [MB] (average 178 MBps) 00:09:18.539 00:09:18.539 03:59:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:18.539 03:59:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 
uv7tt3l3hw7ht4dg2rs8hesikabsd5w7hs7itnk6sqyduff7qbsaigoxdfnn95d3f74re0urs88ur0kzh1l1ksoj1dnijk6gya40kpzxp2gq26fc0s5cgzdz2ve1ad5jmn2mlf75tws5dgrhowdxzo3slmgqei87ctgodmuxq7lbe7j4bxsk6seckiquayl9v7l9dex3ji9t6dmo9vl5zu4xk8annwx96veerlmnu63d45tlhjh6fxhgpz46ciwqjt7jydkmtxpxqy8c1tdmlbfk2mzichddinvlpsk64o64oft8kyehlq3mirar7ncgugly69g5k8tvkjpd474n2l234dwxmxekm91u0e1nfsz6tuk4u48l80y7oepq913f64i9dsh3e6n014sf7iftxn3d6l1u5jig1dkaf625iwufsylv6f4kv1igr4uysxjflzbz7pvox2o43fewxx13ooei778jqd2jfrhmdc94rd0ue1blqmwssjpwpgzmxaojpfusc8zmcg4j2bdc4jsnpmtm9um41pikes8ycke4m8adlfjdnujna7uith67aukn58o9eujh301oaeldbobp9i5atucs0k2blzxf54wjcex6zj153j291t5au5ei2nw99ra5jzzpbof2luuemnou0166f3ggsos5lkfqvx5nt7ugjm08ts9far0m0da99wuluqhfcene5z43hmspp03h27ne8i0xbqwdnenups5aq7a65veps9lm20edk6nhx791w2pjyfdrak2493n3aey099tuxy9g8xdo0wxa57ragupwcyirgliemq4fhhn2e8vweq95frhlndr3mlj2twr9qrq7zsbz8desvlhmcmhr1f1nneanbmmzqg72haujw65ed8j6yskg6ewtv6u1ow6u4hlaqlgk1n54af7upkh2sjclxrhel9pbe7c9gmxfpypnbvmu0cyy788crf40h26nxdgocrdijw3waidscvh8bxzul0yzouoeofah659ns416 == \u\v\7\t\t\3\l\3\h\w\7\h\t\4\d\g\2\r\s\8\h\e\s\i\k\a\b\s\d\5\w\7\h\s\7\i\t\n\k\6\s\q\y\d\u\f\f\7\q\b\s\a\i\g\o\x\d\f\n\n\9\5\d\3\f\7\4\r\e\0\u\r\s\8\8\u\r\0\k\z\h\1\l\1\k\s\o\j\1\d\n\i\j\k\6\g\y\a\4\0\k\p\z\x\p\2\g\q\2\6\f\c\0\s\5\c\g\z\d\z\2\v\e\1\a\d\5\j\m\n\2\m\l\f\7\5\t\w\s\5\d\g\r\h\o\w\d\x\z\o\3\s\l\m\g\q\e\i\8\7\c\t\g\o\d\m\u\x\q\7\l\b\e\7\j\4\b\x\s\k\6\s\e\c\k\i\q\u\a\y\l\9\v\7\l\9\d\e\x\3\j\i\9\t\6\d\m\o\9\v\l\5\z\u\4\x\k\8\a\n\n\w\x\9\6\v\e\e\r\l\m\n\u\6\3\d\4\5\t\l\h\j\h\6\f\x\h\g\p\z\4\6\c\i\w\q\j\t\7\j\y\d\k\m\t\x\p\x\q\y\8\c\1\t\d\m\l\b\f\k\2\m\z\i\c\h\d\d\i\n\v\l\p\s\k\6\4\o\6\4\o\f\t\8\k\y\e\h\l\q\3\m\i\r\a\r\7\n\c\g\u\g\l\y\6\9\g\5\k\8\t\v\k\j\p\d\4\7\4\n\2\l\2\3\4\d\w\x\m\x\e\k\m\9\1\u\0\e\1\n\f\s\z\6\t\u\k\4\u\4\8\l\8\0\y\7\o\e\p\q\9\1\3\f\6\4\i\9\d\s\h\3\e\6\n\0\1\4\s\f\7\i\f\t\x\n\3\d\6\l\1\u\5\j\i\g\1\d\k\a\f\6\2\5\i\w\u\f\s\y\l\v\6\f\4\k\v\1\i\g\r\4\u\y\s\x\j\f\l\z\b\z\7\p\v\o\x\2\o\4\3\f\e\w\x\x\1\3\o\o\e\i\7\7\8\j\q\d\2\j\f\r\h\m\d\c\9\4\r\d\0\u\e\1\b\l\q\m\w\s\s\j\p\w\p\g\z\m\x\a\o\j\p\f\u\s\c\8\z\m\c\g\4\j\2\b\d\c\4\j\s\n\p\m\t\m\9\u\m\4\1\p\i\k\e\s\8\y\c\k\e\4\m\8\a\d\l\f\j\d\n\u\j\n\a\7\u\i\t\h\6\7\a\u\k\n\5\8\o\9\e\u\j\h\3\0\1\o\a\e\l\d\b\o\b\p\9\i\5\a\t\u\c\s\0\k\2\b\l\z\x\f\5\4\w\j\c\e\x\6\z\j\1\5\3\j\2\9\1\t\5\a\u\5\e\i\2\n\w\9\9\r\a\5\j\z\z\p\b\o\f\2\l\u\u\e\m\n\o\u\0\1\6\6\f\3\g\g\s\o\s\5\l\k\f\q\v\x\5\n\t\7\u\g\j\m\0\8\t\s\9\f\a\r\0\m\0\d\a\9\9\w\u\l\u\q\h\f\c\e\n\e\5\z\4\3\h\m\s\p\p\0\3\h\2\7\n\e\8\i\0\x\b\q\w\d\n\e\n\u\p\s\5\a\q\7\a\6\5\v\e\p\s\9\l\m\2\0\e\d\k\6\n\h\x\7\9\1\w\2\p\j\y\f\d\r\a\k\2\4\9\3\n\3\a\e\y\0\9\9\t\u\x\y\9\g\8\x\d\o\0\w\x\a\5\7\r\a\g\u\p\w\c\y\i\r\g\l\i\e\m\q\4\f\h\h\n\2\e\8\v\w\e\q\9\5\f\r\h\l\n\d\r\3\m\l\j\2\t\w\r\9\q\r\q\7\z\s\b\z\8\d\e\s\v\l\h\m\c\m\h\r\1\f\1\n\n\e\a\n\b\m\m\z\q\g\7\2\h\a\u\j\w\6\5\e\d\8\j\6\y\s\k\g\6\e\w\t\v\6\u\1\o\w\6\u\4\h\l\a\q\l\g\k\1\n\5\4\a\f\7\u\p\k\h\2\s\j\c\l\x\r\h\e\l\9\p\b\e\7\c\9\g\m\x\f\p\y\p\n\b\v\m\u\0\c\y\y\7\8\8\c\r\f\4\0\h\2\6\n\x\d\g\o\c\r\d\i\j\w\3\w\a\i\d\s\c\v\h\8\b\x\z\u\l\0\y\z\o\u\o\e\o\f\a\h\6\5\9\n\s\4\1\6 ]] 00:09:18.539 03:59:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:18.539 03:59:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 
uv7tt3l3hw7ht4dg2rs8hesikabsd5w7hs7itnk6sqyduff7qbsaigoxdfnn95d3f74re0urs88ur0kzh1l1ksoj1dnijk6gya40kpzxp2gq26fc0s5cgzdz2ve1ad5jmn2mlf75tws5dgrhowdxzo3slmgqei87ctgodmuxq7lbe7j4bxsk6seckiquayl9v7l9dex3ji9t6dmo9vl5zu4xk8annwx96veerlmnu63d45tlhjh6fxhgpz46ciwqjt7jydkmtxpxqy8c1tdmlbfk2mzichddinvlpsk64o64oft8kyehlq3mirar7ncgugly69g5k8tvkjpd474n2l234dwxmxekm91u0e1nfsz6tuk4u48l80y7oepq913f64i9dsh3e6n014sf7iftxn3d6l1u5jig1dkaf625iwufsylv6f4kv1igr4uysxjflzbz7pvox2o43fewxx13ooei778jqd2jfrhmdc94rd0ue1blqmwssjpwpgzmxaojpfusc8zmcg4j2bdc4jsnpmtm9um41pikes8ycke4m8adlfjdnujna7uith67aukn58o9eujh301oaeldbobp9i5atucs0k2blzxf54wjcex6zj153j291t5au5ei2nw99ra5jzzpbof2luuemnou0166f3ggsos5lkfqvx5nt7ugjm08ts9far0m0da99wuluqhfcene5z43hmspp03h27ne8i0xbqwdnenups5aq7a65veps9lm20edk6nhx791w2pjyfdrak2493n3aey099tuxy9g8xdo0wxa57ragupwcyirgliemq4fhhn2e8vweq95frhlndr3mlj2twr9qrq7zsbz8desvlhmcmhr1f1nneanbmmzqg72haujw65ed8j6yskg6ewtv6u1ow6u4hlaqlgk1n54af7upkh2sjclxrhel9pbe7c9gmxfpypnbvmu0cyy788crf40h26nxdgocrdijw3waidscvh8bxzul0yzouoeofah659ns416 == \u\v\7\t\t\3\l\3\h\w\7\h\t\4\d\g\2\r\s\8\h\e\s\i\k\a\b\s\d\5\w\7\h\s\7\i\t\n\k\6\s\q\y\d\u\f\f\7\q\b\s\a\i\g\o\x\d\f\n\n\9\5\d\3\f\7\4\r\e\0\u\r\s\8\8\u\r\0\k\z\h\1\l\1\k\s\o\j\1\d\n\i\j\k\6\g\y\a\4\0\k\p\z\x\p\2\g\q\2\6\f\c\0\s\5\c\g\z\d\z\2\v\e\1\a\d\5\j\m\n\2\m\l\f\7\5\t\w\s\5\d\g\r\h\o\w\d\x\z\o\3\s\l\m\g\q\e\i\8\7\c\t\g\o\d\m\u\x\q\7\l\b\e\7\j\4\b\x\s\k\6\s\e\c\k\i\q\u\a\y\l\9\v\7\l\9\d\e\x\3\j\i\9\t\6\d\m\o\9\v\l\5\z\u\4\x\k\8\a\n\n\w\x\9\6\v\e\e\r\l\m\n\u\6\3\d\4\5\t\l\h\j\h\6\f\x\h\g\p\z\4\6\c\i\w\q\j\t\7\j\y\d\k\m\t\x\p\x\q\y\8\c\1\t\d\m\l\b\f\k\2\m\z\i\c\h\d\d\i\n\v\l\p\s\k\6\4\o\6\4\o\f\t\8\k\y\e\h\l\q\3\m\i\r\a\r\7\n\c\g\u\g\l\y\6\9\g\5\k\8\t\v\k\j\p\d\4\7\4\n\2\l\2\3\4\d\w\x\m\x\e\k\m\9\1\u\0\e\1\n\f\s\z\6\t\u\k\4\u\4\8\l\8\0\y\7\o\e\p\q\9\1\3\f\6\4\i\9\d\s\h\3\e\6\n\0\1\4\s\f\7\i\f\t\x\n\3\d\6\l\1\u\5\j\i\g\1\d\k\a\f\6\2\5\i\w\u\f\s\y\l\v\6\f\4\k\v\1\i\g\r\4\u\y\s\x\j\f\l\z\b\z\7\p\v\o\x\2\o\4\3\f\e\w\x\x\1\3\o\o\e\i\7\7\8\j\q\d\2\j\f\r\h\m\d\c\9\4\r\d\0\u\e\1\b\l\q\m\w\s\s\j\p\w\p\g\z\m\x\a\o\j\p\f\u\s\c\8\z\m\c\g\4\j\2\b\d\c\4\j\s\n\p\m\t\m\9\u\m\4\1\p\i\k\e\s\8\y\c\k\e\4\m\8\a\d\l\f\j\d\n\u\j\n\a\7\u\i\t\h\6\7\a\u\k\n\5\8\o\9\e\u\j\h\3\0\1\o\a\e\l\d\b\o\b\p\9\i\5\a\t\u\c\s\0\k\2\b\l\z\x\f\5\4\w\j\c\e\x\6\z\j\1\5\3\j\2\9\1\t\5\a\u\5\e\i\2\n\w\9\9\r\a\5\j\z\z\p\b\o\f\2\l\u\u\e\m\n\o\u\0\1\6\6\f\3\g\g\s\o\s\5\l\k\f\q\v\x\5\n\t\7\u\g\j\m\0\8\t\s\9\f\a\r\0\m\0\d\a\9\9\w\u\l\u\q\h\f\c\e\n\e\5\z\4\3\h\m\s\p\p\0\3\h\2\7\n\e\8\i\0\x\b\q\w\d\n\e\n\u\p\s\5\a\q\7\a\6\5\v\e\p\s\9\l\m\2\0\e\d\k\6\n\h\x\7\9\1\w\2\p\j\y\f\d\r\a\k\2\4\9\3\n\3\a\e\y\0\9\9\t\u\x\y\9\g\8\x\d\o\0\w\x\a\5\7\r\a\g\u\p\w\c\y\i\r\g\l\i\e\m\q\4\f\h\h\n\2\e\8\v\w\e\q\9\5\f\r\h\l\n\d\r\3\m\l\j\2\t\w\r\9\q\r\q\7\z\s\b\z\8\d\e\s\v\l\h\m\c\m\h\r\1\f\1\n\n\e\a\n\b\m\m\z\q\g\7\2\h\a\u\j\w\6\5\e\d\8\j\6\y\s\k\g\6\e\w\t\v\6\u\1\o\w\6\u\4\h\l\a\q\l\g\k\1\n\5\4\a\f\7\u\p\k\h\2\s\j\c\l\x\r\h\e\l\9\p\b\e\7\c\9\g\m\x\f\p\y\p\n\b\v\m\u\0\c\y\y\7\8\8\c\r\f\4\0\h\2\6\n\x\d\g\o\c\r\d\i\j\w\3\w\a\i\d\s\c\v\h\8\b\x\z\u\l\0\y\z\o\u\o\e\o\f\a\h\6\5\9\n\s\4\1\6 ]] 00:09:18.539 03:59:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:19.105 03:59:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:19.105 03:59:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:09:19.105 03:59:00 
spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:19.105 03:59:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:19.105 { 00:09:19.105 "subsystems": [ 00:09:19.105 { 00:09:19.105 "subsystem": "bdev", 00:09:19.105 "config": [ 00:09:19.105 { 00:09:19.105 "params": { 00:09:19.105 "block_size": 512, 00:09:19.105 "num_blocks": 1048576, 00:09:19.105 "name": "malloc0" 00:09:19.105 }, 00:09:19.105 "method": "bdev_malloc_create" 00:09:19.105 }, 00:09:19.105 { 00:09:19.105 "params": { 00:09:19.105 "filename": "/dev/zram1", 00:09:19.105 "name": "uring0" 00:09:19.105 }, 00:09:19.105 "method": "bdev_uring_create" 00:09:19.105 }, 00:09:19.105 { 00:09:19.105 "method": "bdev_wait_for_examine" 00:09:19.105 } 00:09:19.105 ] 00:09:19.105 } 00:09:19.105 ] 00:09:19.105 } 00:09:19.105 [2024-12-09 03:59:00.927109] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:09:19.105 [2024-12-09 03:59:00.927244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61595 ] 00:09:19.362 [2024-12-09 03:59:01.081790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.362 [2024-12-09 03:59:01.155756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.362 [2024-12-09 03:59:01.235474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:20.742  [2024-12-09T03:59:03.628Z] Copying: 142/512 [MB] (142 MBps) [2024-12-09T03:59:04.564Z] Copying: 292/512 [MB] (149 MBps) [2024-12-09T03:59:05.130Z] Copying: 439/512 [MB] (147 MBps) [2024-12-09T03:59:05.696Z] Copying: 512/512 [MB] (average 146 MBps) 00:09:23.746 00:09:23.746 03:59:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:23.746 03:59:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:23.746 03:59:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:23.746 03:59:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:23.746 03:59:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:23.746 03:59:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:09:23.746 03:59:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:23.746 03:59:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:23.746 { 00:09:23.746 "subsystems": [ 00:09:23.746 { 00:09:23.746 "subsystem": "bdev", 00:09:23.746 "config": [ 00:09:23.746 { 00:09:23.746 "params": { 00:09:23.746 "block_size": 512, 00:09:23.746 "num_blocks": 1048576, 00:09:23.746 "name": "malloc0" 00:09:23.746 }, 00:09:23.746 "method": "bdev_malloc_create" 00:09:23.746 }, 00:09:23.746 { 00:09:23.746 "params": { 00:09:23.746 "filename": "/dev/zram1", 00:09:23.746 "name": "uring0" 00:09:23.746 }, 00:09:23.746 "method": "bdev_uring_create" 00:09:23.746 }, 00:09:23.746 { 00:09:23.746 "params": { 00:09:23.746 "name": "uring0" 00:09:23.746 }, 00:09:23.746 "method": "bdev_uring_delete" 00:09:23.746 }, 00:09:23.746 { 00:09:23.746 "method": "bdev_wait_for_examine" 00:09:23.746 } 00:09:23.746 ] 00:09:23.746 } 00:09:23.746 ] 00:09:23.746 } 
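The configuration above adds a bdev_uring_delete step, so the run that follows tears uring0 down as part of the same invocation; the next pass then re-runs spdk_dd against the deleted bdev and expects it to fail cleanly. A compressed sketch of that negative check, using the NOT helper visible in the xtrace (output path is illustrative):

  # uring0 was removed by bdev_uring_delete, so opening it must now fail;
  # NOT succeeds only if the wrapped command exits non-zero
  NOT spdk_dd --ib=uring0 --of=/dev/null --json <(gen_conf)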
00:09:23.746 [2024-12-09 03:59:05.641968] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:09:23.746 [2024-12-09 03:59:05.642117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61667 ] 00:09:24.004 [2024-12-09 03:59:05.798139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.004 [2024-12-09 03:59:05.873264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.262 [2024-12-09 03:59:05.953639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:24.520  [2024-12-09T03:59:07.036Z] Copying: 0/0 [B] (average 0 Bps) 00:09:25.086 00:09:25.086 03:59:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:09:25.086 03:59:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:25.086 03:59:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:09:25.086 03:59:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:09:25.086 03:59:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:25.086 03:59:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:25.086 03:59:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:25.086 03:59:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.086 03:59:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.086 03:59:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.086 03:59:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.086 03:59:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.086 03:59:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.086 03:59:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.086 03:59:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:25.086 03:59:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:25.086 [2024-12-09 03:59:06.866365] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:09:25.086 [2024-12-09 03:59:06.866479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61696 ] 00:09:25.086 { 00:09:25.086 "subsystems": [ 00:09:25.086 { 00:09:25.086 "subsystem": "bdev", 00:09:25.086 "config": [ 00:09:25.086 { 00:09:25.086 "params": { 00:09:25.086 "block_size": 512, 00:09:25.086 "num_blocks": 1048576, 00:09:25.086 "name": "malloc0" 00:09:25.086 }, 00:09:25.086 "method": "bdev_malloc_create" 00:09:25.086 }, 00:09:25.086 { 00:09:25.086 "params": { 00:09:25.086 "filename": "/dev/zram1", 00:09:25.086 "name": "uring0" 00:09:25.086 }, 00:09:25.086 "method": "bdev_uring_create" 00:09:25.086 }, 00:09:25.086 { 00:09:25.086 "params": { 00:09:25.086 "name": "uring0" 00:09:25.086 }, 00:09:25.086 "method": "bdev_uring_delete" 00:09:25.086 }, 00:09:25.086 { 00:09:25.086 "method": "bdev_wait_for_examine" 00:09:25.086 } 00:09:25.086 ] 00:09:25.086 } 00:09:25.086 ] 00:09:25.086 } 00:09:25.086 [2024-12-09 03:59:07.019718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.345 [2024-12-09 03:59:07.110368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.345 [2024-12-09 03:59:07.194900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:25.602 [2024-12-09 03:59:07.487683] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:25.602 [2024-12-09 03:59:07.487745] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:25.602 [2024-12-09 03:59:07.487772] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:09:25.602 [2024-12-09 03:59:07.487783] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:26.168 [2024-12-09 03:59:07.997415] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:26.168 03:59:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:09:26.168 03:59:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:26.168 03:59:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:09:26.168 03:59:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:09:26.168 03:59:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:09:26.168 03:59:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:26.168 03:59:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:26.168 03:59:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:09:26.168 03:59:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:09:26.168 03:59:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:09:26.168 03:59:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:09:26.168 03:59:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:26.736 00:09:26.736 real 0m17.037s 00:09:26.736 user 0m11.376s 00:09:26.736 sys 0m13.913s 00:09:26.736 03:59:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.736 03:59:08 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:26.736 ************************************ 00:09:26.736 END TEST dd_uring_copy 00:09:26.736 ************************************ 00:09:26.736 00:09:26.736 real 0m17.288s 00:09:26.736 user 0m11.515s 00:09:26.736 sys 0m14.028s 00:09:26.736 03:59:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.736 03:59:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:26.736 ************************************ 00:09:26.736 END TEST spdk_dd_uring 00:09:26.736 ************************************ 00:09:26.736 03:59:08 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:26.736 03:59:08 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.736 03:59:08 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.736 03:59:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:26.736 ************************************ 00:09:26.736 START TEST spdk_dd_sparse 00:09:26.736 ************************************ 00:09:26.736 03:59:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:26.736 * Looking for test storage... 00:09:26.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:26.736 03:59:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:26.736 03:59:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:09:26.736 03:59:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:26.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.996 --rc genhtml_branch_coverage=1 00:09:26.996 --rc genhtml_function_coverage=1 00:09:26.996 --rc genhtml_legend=1 00:09:26.996 --rc geninfo_all_blocks=1 00:09:26.996 --rc geninfo_unexecuted_blocks=1 00:09:26.996 00:09:26.996 ' 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:26.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.996 --rc genhtml_branch_coverage=1 00:09:26.996 --rc genhtml_function_coverage=1 00:09:26.996 --rc genhtml_legend=1 00:09:26.996 --rc geninfo_all_blocks=1 00:09:26.996 --rc geninfo_unexecuted_blocks=1 00:09:26.996 00:09:26.996 ' 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:26.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.996 --rc genhtml_branch_coverage=1 00:09:26.996 --rc genhtml_function_coverage=1 00:09:26.996 --rc genhtml_legend=1 00:09:26.996 --rc geninfo_all_blocks=1 00:09:26.996 --rc geninfo_unexecuted_blocks=1 00:09:26.996 00:09:26.996 ' 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:26.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.996 --rc genhtml_branch_coverage=1 00:09:26.996 --rc genhtml_function_coverage=1 00:09:26.996 --rc genhtml_legend=1 00:09:26.996 --rc geninfo_all_blocks=1 00:09:26.996 --rc geninfo_unexecuted_blocks=1 00:09:26.996 00:09:26.996 ' 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.996 03:59:08 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:26.996 03:59:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:26.997 1+0 records in 00:09:26.997 1+0 records out 00:09:26.997 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00844707 s, 497 MB/s 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:26.997 1+0 records in 00:09:26.997 1+0 records out 00:09:26.997 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00686487 s, 611 MB/s 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:26.997 1+0 records in 00:09:26.997 1+0 records out 00:09:26.997 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00911143 s, 460 MB/s 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:26.997 ************************************ 00:09:26.997 START TEST dd_sparse_file_to_file 00:09:26.997 ************************************ 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:26.997 03:59:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:26.997 [2024-12-09 03:59:08.833641] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:09:26.997 [2024-12-09 03:59:08.833797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61803 ] 00:09:26.997 { 00:09:26.997 "subsystems": [ 00:09:26.997 { 00:09:26.997 "subsystem": "bdev", 00:09:26.997 "config": [ 00:09:26.997 { 00:09:26.997 "params": { 00:09:26.997 "block_size": 4096, 00:09:26.997 "filename": "dd_sparse_aio_disk", 00:09:26.997 "name": "dd_aio" 00:09:26.997 }, 00:09:26.997 "method": "bdev_aio_create" 00:09:26.997 }, 00:09:26.997 { 00:09:26.997 "params": { 00:09:26.997 "lvs_name": "dd_lvstore", 00:09:26.997 "bdev_name": "dd_aio" 00:09:26.997 }, 00:09:26.997 "method": "bdev_lvol_create_lvstore" 00:09:26.997 }, 00:09:26.997 { 00:09:26.997 "method": "bdev_wait_for_examine" 00:09:26.997 } 00:09:26.997 ] 00:09:26.997 } 00:09:26.997 ] 00:09:26.997 } 00:09:27.255 [2024-12-09 03:59:08.977750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.255 [2024-12-09 03:59:09.066580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.255 [2024-12-09 03:59:09.149770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:27.513  [2024-12-09T03:59:09.722Z] Copying: 12/36 [MB] (average 857 MBps) 00:09:27.772 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:27.772 00:09:27.772 real 0m0.818s 00:09:27.772 user 0m0.513s 00:09:27.772 sys 0m0.475s 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:27.772 ************************************ 00:09:27.772 END TEST dd_sparse_file_to_file 00:09:27.772 ************************************ 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:27.772 ************************************ 00:09:27.772 START TEST dd_sparse_file_to_bdev 
00:09:27.772 ************************************ 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:27.772 03:59:09 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:27.772 { 00:09:27.772 "subsystems": [ 00:09:27.772 { 00:09:27.772 "subsystem": "bdev", 00:09:27.772 "config": [ 00:09:27.772 { 00:09:27.772 "params": { 00:09:27.772 "block_size": 4096, 00:09:27.772 "filename": "dd_sparse_aio_disk", 00:09:27.772 "name": "dd_aio" 00:09:27.772 }, 00:09:27.772 "method": "bdev_aio_create" 00:09:27.772 }, 00:09:27.772 { 00:09:27.772 "params": { 00:09:27.772 "lvs_name": "dd_lvstore", 00:09:27.772 "lvol_name": "dd_lvol", 00:09:27.772 "size_in_mib": 36, 00:09:27.772 "thin_provision": true 00:09:27.772 }, 00:09:27.772 "method": "bdev_lvol_create" 00:09:27.772 }, 00:09:27.772 { 00:09:27.772 "method": "bdev_wait_for_examine" 00:09:27.772 } 00:09:27.773 ] 00:09:27.773 } 00:09:27.773 ] 00:09:27.773 } 00:09:27.773 [2024-12-09 03:59:09.713712] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
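The dd_sparse_file_to_bdev case copies file_zero2 into a freshly created thin-provisioned 36 MiB logical volume, dd_lvstore/dd_lvol, layered on the same aio bdev; the lvol store created in the previous test is rediscovered automatically when dd_aio is re-created, so the JSON config above only needs bdev_aio_create, bdev_lvol_create and bdev_wait_for_examine. A minimal standalone sketch of the same invocation, assuming the config is written to a file instead of being passed on /dev/fd/62:

    cat > dd_lvol.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
        { "method": "bdev_aio_create",
          "params": { "filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096 } },
        { "method": "bdev_lvol_create",
          "params": { "lvs_name": "dd_lvstore", "lvol_name": "dd_lvol",
                      "size_in_mib": 36, "thin_provision": true } },
        { "method": "bdev_wait_for_examine" } ] } ] }
    EOF
    ./build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json dd_lvol.json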
00:09:27.773 [2024-12-09 03:59:09.713826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61851 ] 00:09:28.124 [2024-12-09 03:59:09.864617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.124 [2024-12-09 03:59:09.944266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.385 [2024-12-09 03:59:10.024790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:28.385  [2024-12-09T03:59:10.595Z] Copying: 12/36 [MB] (average 480 MBps) 00:09:28.645 00:09:28.645 00:09:28.645 real 0m0.773s 00:09:28.645 user 0m0.495s 00:09:28.645 sys 0m0.447s 00:09:28.645 03:59:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.645 03:59:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:28.645 ************************************ 00:09:28.645 END TEST dd_sparse_file_to_bdev 00:09:28.645 ************************************ 00:09:28.645 03:59:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:09:28.645 03:59:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.645 03:59:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.645 03:59:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:28.645 ************************************ 00:09:28.645 START TEST dd_sparse_bdev_to_file 00:09:28.645 ************************************ 00:09:28.645 03:59:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:09:28.645 03:59:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:09:28.645 03:59:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:28.645 03:59:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:28.645 03:59:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:28.645 03:59:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:28.645 03:59:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:09:28.645 03:59:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:28.645 03:59:10 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:28.645 [2024-12-09 03:59:10.523181] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
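The dd_sparse_bdev_to_file leg then copies dd_lvstore/dd_lvol back out to file_zero3. The pass criterion, checked just below, is that file_zero2 and file_zero3 agree in both apparent size and allocated blocks; if the holes had been written out as zeroes, the block count would jump to the full 36 MiB (73728 blocks) instead of staying at 24576. A one-line check equivalent to the stat comparisons in this run:

    stat --printf='%s %b\n' file_zero2 file_zero3    # both lines should read: 37748736 24576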
00:09:28.645 [2024-12-09 03:59:10.523277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61878 ] 00:09:28.645 { 00:09:28.645 "subsystems": [ 00:09:28.645 { 00:09:28.645 "subsystem": "bdev", 00:09:28.645 "config": [ 00:09:28.645 { 00:09:28.645 "params": { 00:09:28.645 "block_size": 4096, 00:09:28.645 "filename": "dd_sparse_aio_disk", 00:09:28.645 "name": "dd_aio" 00:09:28.645 }, 00:09:28.645 "method": "bdev_aio_create" 00:09:28.645 }, 00:09:28.645 { 00:09:28.645 "method": "bdev_wait_for_examine" 00:09:28.645 } 00:09:28.645 ] 00:09:28.645 } 00:09:28.645 ] 00:09:28.645 } 00:09:28.903 [2024-12-09 03:59:10.661586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.904 [2024-12-09 03:59:10.739832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.904 [2024-12-09 03:59:10.816429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:29.162  [2024-12-09T03:59:11.370Z] Copying: 12/36 [MB] (average 923 MBps) 00:09:29.420 00:09:29.420 03:59:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:29.420 03:59:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:29.420 03:59:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:29.420 03:59:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:29.420 03:59:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:29.420 03:59:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:29.420 03:59:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:29.420 03:59:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:29.420 03:59:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:29.420 03:59:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:29.420 00:09:29.420 real 0m0.758s 00:09:29.420 user 0m0.479s 00:09:29.420 sys 0m0.437s 00:09:29.420 03:59:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.420 03:59:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:29.420 ************************************ 00:09:29.420 END TEST dd_sparse_bdev_to_file 00:09:29.420 ************************************ 00:09:29.420 03:59:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:09:29.420 03:59:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:29.420 03:59:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:09:29.420 03:59:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:09:29.420 03:59:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:09:29.420 00:09:29.420 real 0m2.786s 00:09:29.420 user 0m1.666s 00:09:29.420 sys 0m1.612s 00:09:29.420 03:59:11 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.420 ************************************ 00:09:29.420 03:59:11 
spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:29.420 END TEST spdk_dd_sparse 00:09:29.420 ************************************ 00:09:29.420 03:59:11 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:29.420 03:59:11 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.420 03:59:11 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.420 03:59:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:29.420 ************************************ 00:09:29.420 START TEST spdk_dd_negative 00:09:29.420 ************************************ 00:09:29.420 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:29.680 * Looking for test storage... 00:09:29.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:29.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.680 --rc genhtml_branch_coverage=1 00:09:29.680 --rc genhtml_function_coverage=1 00:09:29.680 --rc genhtml_legend=1 00:09:29.680 --rc geninfo_all_blocks=1 00:09:29.680 --rc geninfo_unexecuted_blocks=1 00:09:29.680 00:09:29.680 ' 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:29.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.680 --rc genhtml_branch_coverage=1 00:09:29.680 --rc genhtml_function_coverage=1 00:09:29.680 --rc genhtml_legend=1 00:09:29.680 --rc geninfo_all_blocks=1 00:09:29.680 --rc geninfo_unexecuted_blocks=1 00:09:29.680 00:09:29.680 ' 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:29.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.680 --rc genhtml_branch_coverage=1 00:09:29.680 --rc genhtml_function_coverage=1 00:09:29.680 --rc genhtml_legend=1 00:09:29.680 --rc geninfo_all_blocks=1 00:09:29.680 --rc geninfo_unexecuted_blocks=1 00:09:29.680 00:09:29.680 ' 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:29.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.680 --rc genhtml_branch_coverage=1 00:09:29.680 --rc genhtml_function_coverage=1 00:09:29.680 --rc genhtml_legend=1 00:09:29.680 --rc geninfo_all_blocks=1 00:09:29.680 --rc geninfo_unexecuted_blocks=1 00:09:29.680 00:09:29.680 ' 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:29.680 ************************************ 00:09:29.680 START TEST 
dd_invalid_arguments 00:09:29.680 ************************************ 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:29.680 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:29.680 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:29.680 00:09:29.680 CPU options: 00:09:29.680 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:29.680 (like [0,1,10]) 00:09:29.680 --lcores lcore to CPU mapping list. The list is in the format: 00:09:29.680 [<,lcores[@CPUs]>...] 00:09:29.680 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:29.680 Within the group, '-' is used for range separator, 00:09:29.680 ',' is used for single number separator. 00:09:29.680 '( )' can be omitted for single element group, 00:09:29.680 '@' can be omitted if cpus and lcores have the same value 00:09:29.680 --disable-cpumask-locks Disable CPU core lock files. 00:09:29.680 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:29.680 pollers in the app support interrupt mode) 00:09:29.680 -p, --main-core main (primary) core for DPDK 00:09:29.680 00:09:29.680 Configuration options: 00:09:29.680 -c, --config, --json JSON config file 00:09:29.680 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:29.680 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:09:29.680 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:29.680 --rpcs-allowed comma-separated list of permitted RPCS 00:09:29.680 --json-ignore-init-errors don't exit on invalid config entry 00:09:29.680 00:09:29.680 Memory options: 00:09:29.680 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:29.680 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:29.680 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:29.680 -R, --huge-unlink unlink huge files after initialization 00:09:29.680 -n, --mem-channels number of memory channels used for DPDK 00:09:29.680 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:29.680 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:29.680 --no-huge run without using hugepages 00:09:29.680 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:09:29.680 -i, --shm-id shared memory ID (optional) 00:09:29.680 -g, --single-file-segments force creating just one hugetlbfs file 00:09:29.680 00:09:29.680 PCI options: 00:09:29.680 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:29.680 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:29.680 -u, --no-pci disable PCI access 00:09:29.680 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:29.680 00:09:29.680 Log options: 00:09:29.680 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:09:29.680 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:09:29.680 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:09:29.680 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:09:29.680 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:09:29.680 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:09:29.680 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:09:29.680 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:09:29.680 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:09:29.680 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:09:29.680 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:09:29.680 --silence-noticelog disable notice level logging to stderr 00:09:29.680 00:09:29.680 Trace options: 00:09:29.680 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:29.680 setting 0 to disable trace (default 32768) 00:09:29.680 Tracepoints vary in size and can use more than one trace entry. 00:09:29.680 -e, --tpoint-group [:] 00:09:29.680 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:29.680 [2024-12-09 03:59:11.601831] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:09:29.680 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:09:29.680 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:09:29.680 bdev_raid, scheduler, all). 00:09:29.680 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:29.680 a tracepoint group. First tpoint inside a group can be enabled by 00:09:29.680 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:29.680 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:09:29.680 in /include/spdk_internal/trace_defs.h 00:09:29.680 00:09:29.680 Other options: 00:09:29.680 -h, --help show this usage 00:09:29.680 -v, --version print SPDK version 00:09:29.680 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:29.680 --env-context Opaque context for use of the env implementation 00:09:29.680 00:09:29.680 Application specific: 00:09:29.680 [--------- DD Options ---------] 00:09:29.680 --if Input file. Must specify either --if or --ib. 00:09:29.681 --ib Input bdev. Must specifier either --if or --ib 00:09:29.681 --of Output file. Must specify either --of or --ob. 00:09:29.681 --ob Output bdev. Must specify either --of or --ob. 00:09:29.681 --iflag Input file flags. 00:09:29.681 --oflag Output file flags. 00:09:29.681 --bs I/O unit size (default: 4096) 00:09:29.681 --qd Queue depth (default: 2) 00:09:29.681 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:29.681 --skip Skip this many I/O units at start of input. (default: 0) 00:09:29.681 --seek Skip this many I/O units at start of output. (default: 0) 00:09:29.681 --aio Force usage of AIO. (by default io_uring is used if available) 00:09:29.681 --sparse Enable hole skipping in input target 00:09:29.681 Available iflag and oflag values: 00:09:29.681 append - append mode 00:09:29.681 direct - use direct I/O for data 00:09:29.681 directory - fail unless a directory 00:09:29.681 dsync - use synchronized I/O for data 00:09:29.681 noatime - do not update access time 00:09:29.681 noctty - do not assign controlling terminal from file 00:09:29.681 nofollow - do not follow symlinks 00:09:29.681 nonblock - use non-blocking I/O 00:09:29.681 sync - use synchronized I/O for data and metadata 00:09:29.681 ************************************ 00:09:29.681 END TEST dd_invalid_arguments 00:09:29.681 ************************************ 00:09:29.681 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:09:29.681 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:29.681 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:29.681 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:29.681 00:09:29.681 real 0m0.069s 00:09:29.681 user 0m0.043s 00:09:29.681 sys 0m0.025s 00:09:29.681 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.681 03:59:11 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:09:29.949 03:59:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:09:29.949 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.949 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.949 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:29.949 ************************************ 00:09:29.949 START TEST dd_double_input 00:09:29.949 ************************************ 00:09:29.949 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:09:29.949 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:29.949 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:09:29.949 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:29.949 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.949 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.949 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.949 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.949 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:29.950 [2024-12-09 03:59:11.726813] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
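Each of these negative cases follows the same template: the NOT wrapper from common/autotest_common.sh runs spdk_dd with an invalid option combination, and the test passes only if the command exits non-zero, with the specific diagnostic left in the log as seen above. A minimal standalone version of the double-input case (paths shortened from this run):

    if ./build/bin/spdk_dd --if=test/dd/dd.dump0 --ib= --ob= 2>err.log; then
        echo 'spdk_dd unexpectedly accepted both --if and --ib'; exit 1
    fi
    grep -q 'either --if or --ib, but not both' err.log    # optional: also assert the diagnostic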
00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:29.950 00:09:29.950 real 0m0.079s 00:09:29.950 user 0m0.049s 00:09:29.950 sys 0m0.028s 00:09:29.950 ************************************ 00:09:29.950 END TEST dd_double_input 00:09:29.950 ************************************ 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:29.950 ************************************ 00:09:29.950 START TEST dd_double_output 00:09:29.950 ************************************ 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:29.950 [2024-12-09 03:59:11.853882] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:29.950 00:09:29.950 real 0m0.079s 00:09:29.950 user 0m0.053s 00:09:29.950 sys 0m0.026s 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.950 ************************************ 00:09:29.950 END TEST dd_double_output 00:09:29.950 ************************************ 00:09:29.950 03:59:11 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:09:30.209 03:59:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:09:30.209 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.209 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.209 03:59:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:30.209 ************************************ 00:09:30.209 START TEST dd_no_input 00:09:30.209 ************************************ 00:09:30.209 03:59:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:09:30.209 03:59:11 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:30.209 03:59:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:09:30.209 03:59:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:30.209 03:59:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.209 03:59:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.209 03:59:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.209 03:59:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.209 03:59:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.209 03:59:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.209 03:59:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.209 03:59:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:30.209 03:59:11 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:30.209 [2024-12-09 03:59:11.988480] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:30.209 ************************************ 00:09:30.209 END TEST dd_no_input 00:09:30.209 ************************************ 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:30.209 00:09:30.209 real 0m0.083s 00:09:30.209 user 0m0.050s 00:09:30.209 sys 0m0.031s 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:30.209 ************************************ 00:09:30.209 START TEST dd_no_output 00:09:30.209 ************************************ 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:30.209 [2024-12-09 03:59:12.125744] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:09:30.209 03:59:12 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:30.209 00:09:30.209 real 0m0.081s 00:09:30.209 user 0m0.052s 00:09:30.209 sys 0m0.028s 00:09:30.209 ************************************ 00:09:30.209 END TEST dd_no_output 00:09:30.209 ************************************ 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.209 03:59:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:30.468 ************************************ 00:09:30.468 START TEST dd_wrong_blocksize 00:09:30.468 ************************************ 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:30.468 [2024-12-09 03:59:12.256215] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:30.468 00:09:30.468 real 0m0.078s 00:09:30.468 user 0m0.051s 00:09:30.468 sys 0m0.026s 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.468 ************************************ 00:09:30.468 END TEST dd_wrong_blocksize 00:09:30.468 ************************************ 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:30.468 ************************************ 00:09:30.468 START TEST dd_smaller_blocksize 00:09:30.468 ************************************ 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.468 
03:59:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:30.468 03:59:12 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:30.468 [2024-12-09 03:59:12.391447] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:09:30.468 [2024-12-09 03:59:12.391555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62110 ] 00:09:30.727 [2024-12-09 03:59:12.545736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.727 [2024-12-09 03:59:12.634239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.984 [2024-12-09 03:59:12.713806] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.242 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:31.499 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:31.499 [2024-12-09 03:59:13.432616] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:31.499 [2024-12-09 03:59:13.432700] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:31.757 [2024-12-09 03:59:13.612749] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:31.757 03:59:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:09:31.757 03:59:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:31.757 03:59:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:09:31.757 03:59:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:09:31.757 03:59:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:09:31.757 03:59:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:31.757 00:09:31.757 real 0m1.370s 00:09:31.757 user 0m0.519s 00:09:31.757 sys 0m0.740s 00:09:31.757 ************************************ 00:09:31.757 END TEST dd_smaller_blocksize 00:09:31.757 ************************************ 00:09:31.757 03:59:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.757 03:59:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:32.016 ************************************ 00:09:32.016 START TEST dd_invalid_count 00:09:32.016 ************************************ 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
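In the dd_smaller_blocksize case above, the oversized --bs=99999999999999 makes spdk_dd fail its buffer allocation, and the expected outcome is the graceful "Cannot allocate memory - try smaller block size value" diagnostic followed by a clean app stop rather than a crash. A standalone sketch using the value from this run; the dd_invalid_count case that continues below applies the same template with --count=-9:

    ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --bs=99999999999999
    echo $?    # expected: non-zero, with the 'try smaller block size value' error on stderr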
00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:32.016 [2024-12-09 03:59:13.816973] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:32.016 00:09:32.016 real 0m0.081s 00:09:32.016 user 0m0.044s 00:09:32.016 sys 0m0.036s 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:09:32.016 ************************************ 00:09:32.016 END TEST dd_invalid_count 00:09:32.016 ************************************ 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:32.016 ************************************ 
00:09:32.016 START TEST dd_invalid_oflag 00:09:32.016 ************************************ 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:32.016 [2024-12-09 03:59:13.944229] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:32.016 00:09:32.016 real 0m0.073s 00:09:32.016 user 0m0.042s 00:09:32.016 sys 0m0.030s 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.016 03:59:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:09:32.016 ************************************ 00:09:32.016 END TEST dd_invalid_oflag 00:09:32.016 ************************************ 00:09:32.275 03:59:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:32.276 ************************************ 00:09:32.276 START TEST dd_invalid_iflag 00:09:32.276 
************************************ 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:32.276 [2024-12-09 03:59:14.068541] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:32.276 ************************************ 00:09:32.276 END TEST dd_invalid_iflag 00:09:32.276 ************************************ 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:32.276 00:09:32.276 real 0m0.068s 00:09:32.276 user 0m0.039s 00:09:32.276 sys 0m0.027s 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:32.276 ************************************ 00:09:32.276 START TEST dd_unknown_flag 00:09:32.276 ************************************ 00:09:32.276 
03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:32.276 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:32.276 [2024-12-09 03:59:14.196997] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:09:32.276 [2024-12-09 03:59:14.197132] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62213 ] 00:09:32.534 [2024-12-09 03:59:14.340219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.534 [2024-12-09 03:59:14.421921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.793 [2024-12-09 03:59:14.506728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.793 [2024-12-09 03:59:14.565020] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:09:32.793 [2024-12-09 03:59:14.565118] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:32.793 [2024-12-09 03:59:14.565210] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:09:32.793 [2024-12-09 03:59:14.565227] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:32.793 [2024-12-09 03:59:14.565541] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:09:32.793 [2024-12-09 03:59:14.565558] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:32.793 [2024-12-09 03:59:14.565619] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:32.793 [2024-12-09 03:59:14.565630] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:33.052 [2024-12-09 03:59:14.750350] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.052 00:09:33.052 real 0m0.701s 00:09:33.052 user 0m0.400s 00:09:33.052 sys 0m0.204s 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.052 ************************************ 00:09:33.052 END TEST dd_unknown_flag 00:09:33.052 ************************************ 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:33.052 ************************************ 00:09:33.052 START TEST dd_invalid_json 00:09:33.052 ************************************ 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:33.052 03:59:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:33.052 [2024-12-09 03:59:14.951644] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:09:33.052 [2024-12-09 03:59:14.951760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62246 ] 00:09:33.310 [2024-12-09 03:59:15.089874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.310 [2024-12-09 03:59:15.166497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.310 [2024-12-09 03:59:15.166643] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:09:33.310 [2024-12-09 03:59:15.166666] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:33.310 [2024-12-09 03:59:15.166677] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:33.310 [2024-12-09 03:59:15.166721] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:33.568 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:09:33.568 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.568 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:09:33.568 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:09:33.568 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:09:33.568 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.568 00:09:33.568 real 0m0.366s 00:09:33.568 user 0m0.193s 00:09:33.568 sys 0m0.070s 00:09:33.568 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.568 ************************************ 00:09:33.568 END TEST dd_invalid_json 00:09:33.568 ************************************ 00:09:33.568 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:09:33.568 03:59:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:09:33.568 03:59:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.568 03:59:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.568 03:59:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:33.568 ************************************ 00:09:33.568 START TEST dd_invalid_seek 00:09:33.568 ************************************ 00:09:33.568 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:09:33.568 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:33.568 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:33.569 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:09:33.569 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:33.569 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:33.569 
03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:09:33.569 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:33.569 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:09:33.569 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:09:33.569 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:33.569 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:09:33.569 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.569 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:33.569 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.569 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.569 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.569 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.569 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.569 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.569 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:33.569 03:59:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:33.569 [2024-12-09 03:59:15.380003] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:09:33.569 [2024-12-09 03:59:15.380118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62271 ] 00:09:33.569 { 00:09:33.569 "subsystems": [ 00:09:33.569 { 00:09:33.569 "subsystem": "bdev", 00:09:33.569 "config": [ 00:09:33.569 { 00:09:33.569 "params": { 00:09:33.569 "block_size": 512, 00:09:33.569 "num_blocks": 512, 00:09:33.569 "name": "malloc0" 00:09:33.569 }, 00:09:33.569 "method": "bdev_malloc_create" 00:09:33.569 }, 00:09:33.569 { 00:09:33.569 "params": { 00:09:33.569 "block_size": 512, 00:09:33.569 "num_blocks": 512, 00:09:33.569 "name": "malloc1" 00:09:33.569 }, 00:09:33.569 "method": "bdev_malloc_create" 00:09:33.569 }, 00:09:33.569 { 00:09:33.569 "method": "bdev_wait_for_examine" 00:09:33.569 } 00:09:33.569 ] 00:09:33.569 } 00:09:33.569 ] 00:09:33.569 } 00:09:33.827 [2024-12-09 03:59:15.518736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.827 [2024-12-09 03:59:15.594017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.827 [2024-12-09 03:59:15.675475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.827 [2024-12-09 03:59:15.759369] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:09:33.827 [2024-12-09 03:59:15.759455] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:34.086 [2024-12-09 03:59:15.956305] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:34.345 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:09:34.345 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:34.345 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:09:34.345 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:09:34.345 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:09:34.345 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:34.345 00:09:34.345 real 0m0.730s 00:09:34.345 user 0m0.470s 00:09:34.345 sys 0m0.219s 00:09:34.345 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.345 ************************************ 00:09:34.345 END TEST dd_invalid_seek 00:09:34.345 ************************************ 00:09:34.345 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:34.345 03:59:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:09:34.345 03:59:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.345 03:59:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.345 03:59:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:34.345 ************************************ 00:09:34.345 START TEST dd_invalid_skip 00:09:34.345 ************************************ 00:09:34.345 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:09:34.345 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:34.345 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:34.345 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:09:34.345 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:34.346 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:34.346 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:09:34.346 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:34.346 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:09:34.346 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:09:34.346 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:09:34.346 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:34.346 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:34.346 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.346 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.346 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.346 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.346 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.346 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.346 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.346 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:34.346 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:34.346 { 00:09:34.346 "subsystems": [ 00:09:34.346 { 00:09:34.346 "subsystem": "bdev", 00:09:34.346 "config": [ 00:09:34.346 { 00:09:34.346 "params": { 00:09:34.346 "block_size": 512, 00:09:34.346 "num_blocks": 512, 00:09:34.346 "name": "malloc0" 00:09:34.346 }, 00:09:34.346 "method": "bdev_malloc_create" 00:09:34.346 }, 00:09:34.346 { 00:09:34.346 "params": { 00:09:34.346 "block_size": 512, 00:09:34.346 "num_blocks": 512, 00:09:34.346 "name": "malloc1" 
00:09:34.346 }, 00:09:34.346 "method": "bdev_malloc_create" 00:09:34.346 }, 00:09:34.346 { 00:09:34.346 "method": "bdev_wait_for_examine" 00:09:34.346 } 00:09:34.346 ] 00:09:34.346 } 00:09:34.346 ] 00:09:34.346 } 00:09:34.346 [2024-12-09 03:59:16.177663] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:09:34.346 [2024-12-09 03:59:16.177765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62310 ] 00:09:34.604 [2024-12-09 03:59:16.329502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.604 [2024-12-09 03:59:16.406642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.604 [2024-12-09 03:59:16.493773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:34.880 [2024-12-09 03:59:16.575513] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:09:34.880 [2024-12-09 03:59:16.575585] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:34.880 [2024-12-09 03:59:16.754771] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.139 00:09:35.139 real 0m0.735s 00:09:35.139 user 0m0.464s 00:09:35.139 sys 0m0.231s 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.139 ************************************ 00:09:35.139 END TEST dd_invalid_skip 00:09:35.139 ************************************ 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:35.139 ************************************ 00:09:35.139 START TEST dd_invalid_input_count 00:09:35.139 ************************************ 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:35.139 03:59:16 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:35.139 03:59:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:35.139 { 00:09:35.139 "subsystems": [ 00:09:35.139 { 00:09:35.139 "subsystem": "bdev", 00:09:35.139 "config": [ 00:09:35.139 { 00:09:35.139 "params": { 00:09:35.139 "block_size": 512, 00:09:35.139 "num_blocks": 512, 00:09:35.139 "name": "malloc0" 00:09:35.139 }, 00:09:35.139 "method": "bdev_malloc_create" 00:09:35.139 }, 00:09:35.139 { 00:09:35.139 "params": { 00:09:35.139 "block_size": 512, 00:09:35.139 "num_blocks": 512, 00:09:35.139 "name": "malloc1" 00:09:35.139 }, 00:09:35.139 "method": "bdev_malloc_create" 00:09:35.139 }, 00:09:35.139 { 00:09:35.139 "method": "bdev_wait_for_examine" 00:09:35.139 } 
00:09:35.139 ] 00:09:35.139 } 00:09:35.139 ] 00:09:35.139 } 00:09:35.139 [2024-12-09 03:59:16.961630] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:09:35.139 [2024-12-09 03:59:16.961744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62344 ] 00:09:35.398 [2024-12-09 03:59:17.112763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.398 [2024-12-09 03:59:17.179015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.398 [2024-12-09 03:59:17.258162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:35.398 [2024-12-09 03:59:17.338487] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:09:35.398 [2024-12-09 03:59:17.338562] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:35.656 [2024-12-09 03:59:17.523837] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.916 00:09:35.916 real 0m0.709s 00:09:35.916 user 0m0.463s 00:09:35.916 sys 0m0.205s 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:35.916 ************************************ 00:09:35.916 END TEST dd_invalid_input_count 00:09:35.916 ************************************ 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:35.916 ************************************ 00:09:35.916 START TEST dd_invalid_output_count 00:09:35.916 ************************************ 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A 
method_bdev_malloc_create_0 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:35.916 03:59:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:35.916 { 00:09:35.916 "subsystems": [ 00:09:35.916 { 00:09:35.916 "subsystem": "bdev", 00:09:35.916 "config": [ 00:09:35.916 { 00:09:35.916 "params": { 00:09:35.916 "block_size": 512, 00:09:35.916 "num_blocks": 512, 00:09:35.916 "name": "malloc0" 00:09:35.916 }, 00:09:35.916 "method": "bdev_malloc_create" 00:09:35.916 }, 00:09:35.916 { 00:09:35.916 "method": "bdev_wait_for_examine" 00:09:35.916 } 00:09:35.916 ] 00:09:35.916 } 00:09:35.916 ] 00:09:35.916 } 00:09:35.916 [2024-12-09 03:59:17.730503] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:09:35.916 [2024-12-09 03:59:17.730649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62377 ] 00:09:36.175 [2024-12-09 03:59:17.878095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.175 [2024-12-09 03:59:17.942343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.175 [2024-12-09 03:59:18.001029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.175 [2024-12-09 03:59:18.058367] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:09:36.175 [2024-12-09 03:59:18.058428] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:36.434 [2024-12-09 03:59:18.216839] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:36.434 00:09:36.434 real 0m0.641s 00:09:36.434 user 0m0.408s 00:09:36.434 sys 0m0.179s 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:36.434 ************************************ 00:09:36.434 END TEST dd_invalid_output_count 00:09:36.434 ************************************ 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:36.434 ************************************ 00:09:36.434 START TEST dd_bs_not_multiple 00:09:36.434 ************************************ 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:36.434 03:59:18 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:36.434 03:59:18 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:36.692 { 00:09:36.692 "subsystems": [ 00:09:36.692 { 00:09:36.692 "subsystem": "bdev", 00:09:36.692 "config": [ 00:09:36.692 { 00:09:36.692 "params": { 00:09:36.692 "block_size": 512, 00:09:36.692 "num_blocks": 512, 00:09:36.692 "name": "malloc0" 00:09:36.692 }, 00:09:36.692 "method": "bdev_malloc_create" 00:09:36.692 }, 00:09:36.692 { 00:09:36.692 "params": { 00:09:36.692 "block_size": 512, 00:09:36.692 "num_blocks": 512, 00:09:36.692 "name": "malloc1" 00:09:36.693 }, 00:09:36.693 "method": "bdev_malloc_create" 00:09:36.693 }, 00:09:36.693 { 00:09:36.693 "method": "bdev_wait_for_examine" 00:09:36.693 } 00:09:36.693 ] 00:09:36.693 } 00:09:36.693 ] 00:09:36.693 } 00:09:36.693 [2024-12-09 03:59:18.432956] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:09:36.693 [2024-12-09 03:59:18.433063] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62414 ] 00:09:36.693 [2024-12-09 03:59:18.585970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.951 [2024-12-09 03:59:18.658223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.951 [2024-12-09 03:59:18.739123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.951 [2024-12-09 03:59:18.819703] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:09:36.951 [2024-12-09 03:59:18.819801] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:37.209 [2024-12-09 03:59:18.998047] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:37.209 03:59:19 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:09:37.209 03:59:19 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:37.209 03:59:19 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:09:37.209 03:59:19 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:09:37.209 03:59:19 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:09:37.209 03:59:19 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:37.209 00:09:37.209 real 0m0.725s 00:09:37.209 user 0m0.458s 00:09:37.209 sys 0m0.215s 00:09:37.209 03:59:19 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.209 03:59:19 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:37.209 ************************************ 00:09:37.209 END TEST dd_bs_not_multiple 00:09:37.209 ************************************ 00:09:37.209 ************************************ 00:09:37.209 END TEST spdk_dd_negative 00:09:37.209 ************************************ 00:09:37.209 00:09:37.209 real 0m7.782s 00:09:37.209 user 0m4.171s 00:09:37.209 sys 0m2.977s 00:09:37.209 03:59:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.209 03:59:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:37.470 00:09:37.470 real 1m28.049s 00:09:37.470 user 0m56.276s 00:09:37.470 sys 0m39.445s 00:09:37.470 03:59:19 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.470 03:59:19 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:37.470 ************************************ 00:09:37.470 END TEST spdk_dd 00:09:37.470 ************************************ 00:09:37.470 03:59:19 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:37.470 03:59:19 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:37.470 03:59:19 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:37.470 03:59:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:37.470 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:09:37.470 03:59:19 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:37.470 03:59:19 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:09:37.470 03:59:19 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:09:37.470 03:59:19 -- spdk/autotest.sh@277 -- 
# export NET_TYPE 00:09:37.470 03:59:19 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:09:37.470 03:59:19 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:09:37.470 03:59:19 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:37.470 03:59:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:37.470 03:59:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.470 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:09:37.470 ************************************ 00:09:37.470 START TEST nvmf_tcp 00:09:37.470 ************************************ 00:09:37.470 03:59:19 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:37.470 * Looking for test storage... 00:09:37.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:37.470 03:59:19 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:37.470 03:59:19 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:37.470 03:59:19 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:37.733 03:59:19 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.733 03:59:19 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:37.733 03:59:19 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.733 03:59:19 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:37.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.733 --rc genhtml_branch_coverage=1 00:09:37.733 --rc genhtml_function_coverage=1 00:09:37.733 --rc genhtml_legend=1 00:09:37.733 --rc geninfo_all_blocks=1 00:09:37.733 --rc geninfo_unexecuted_blocks=1 00:09:37.733 00:09:37.733 ' 00:09:37.733 03:59:19 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:37.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.733 --rc genhtml_branch_coverage=1 00:09:37.733 --rc genhtml_function_coverage=1 00:09:37.733 --rc genhtml_legend=1 00:09:37.733 --rc geninfo_all_blocks=1 00:09:37.733 --rc geninfo_unexecuted_blocks=1 00:09:37.733 00:09:37.733 ' 00:09:37.733 03:59:19 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:37.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.733 --rc genhtml_branch_coverage=1 00:09:37.733 --rc genhtml_function_coverage=1 00:09:37.733 --rc genhtml_legend=1 00:09:37.733 --rc geninfo_all_blocks=1 00:09:37.733 --rc geninfo_unexecuted_blocks=1 00:09:37.733 00:09:37.733 ' 00:09:37.733 03:59:19 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:37.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.733 --rc genhtml_branch_coverage=1 00:09:37.733 --rc genhtml_function_coverage=1 00:09:37.733 --rc genhtml_legend=1 00:09:37.733 --rc geninfo_all_blocks=1 00:09:37.733 --rc geninfo_unexecuted_blocks=1 00:09:37.733 00:09:37.733 ' 00:09:37.733 03:59:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:37.733 03:59:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:37.733 03:59:19 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:37.733 03:59:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:37.733 03:59:19 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.733 03:59:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:37.733 ************************************ 00:09:37.733 START TEST nvmf_target_core 00:09:37.733 ************************************ 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:37.733 * Looking for test storage... 00:09:37.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.733 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:37.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.734 --rc genhtml_branch_coverage=1 00:09:37.734 --rc genhtml_function_coverage=1 00:09:37.734 --rc genhtml_legend=1 00:09:37.734 --rc geninfo_all_blocks=1 00:09:37.734 --rc geninfo_unexecuted_blocks=1 00:09:37.734 00:09:37.734 ' 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:37.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.734 --rc genhtml_branch_coverage=1 00:09:37.734 --rc genhtml_function_coverage=1 00:09:37.734 --rc genhtml_legend=1 00:09:37.734 --rc geninfo_all_blocks=1 00:09:37.734 --rc geninfo_unexecuted_blocks=1 00:09:37.734 00:09:37.734 ' 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:37.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.734 --rc genhtml_branch_coverage=1 00:09:37.734 --rc genhtml_function_coverage=1 00:09:37.734 --rc genhtml_legend=1 00:09:37.734 --rc geninfo_all_blocks=1 00:09:37.734 --rc geninfo_unexecuted_blocks=1 00:09:37.734 00:09:37.734 ' 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:37.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.734 --rc genhtml_branch_coverage=1 00:09:37.734 --rc genhtml_function_coverage=1 00:09:37.734 --rc genhtml_legend=1 00:09:37.734 --rc geninfo_all_blocks=1 00:09:37.734 --rc geninfo_unexecuted_blocks=1 00:09:37.734 00:09:37.734 ' 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.734 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.994 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:09:37.994 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:09:37.994 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.994 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.994 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:37.994 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.994 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:37.994 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.994 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.994 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.994 03:59:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.994 03:59:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.994 03:59:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
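The three paths/export.sh assignments traced above explain why the PATH string keeps growing: each time export.sh is sourced it prepends the same toolchain directories again, once per nesting level of run_test. A condensed sketch of that effect, with everything after the toolchain directories abbreviated:

# paths/export.sh, reduced to its effect on PATH
PATH=/opt/golangci/1.54.2/bin:$PATH    # @2  prepend golangci-lint
PATH=/opt/go/1.21.1/bin:$PATH          # @3  prepend the Go toolchain
PATH=/opt/protoc/21.7/bin:$PATH        # @4  prepend protoc
export PATH                            # @5/@6  export and echo the result
# every nested 'source /etc/opt/spdk-pkgdep/paths/export.sh' repeats these
# prepends, which is what produces the duplicated /opt/... prefix runs above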
00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.995 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:37.995 ************************************ 00:09:37.995 START TEST nvmf_host_management 00:09:37.995 ************************************ 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:37.995 * Looking for test storage... 
00:09:37.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:37.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.995 --rc genhtml_branch_coverage=1 00:09:37.995 --rc genhtml_function_coverage=1 00:09:37.995 --rc genhtml_legend=1 00:09:37.995 --rc geninfo_all_blocks=1 00:09:37.995 --rc geninfo_unexecuted_blocks=1 00:09:37.995 00:09:37.995 ' 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:37.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.995 --rc genhtml_branch_coverage=1 00:09:37.995 --rc genhtml_function_coverage=1 00:09:37.995 --rc genhtml_legend=1 00:09:37.995 --rc geninfo_all_blocks=1 00:09:37.995 --rc geninfo_unexecuted_blocks=1 00:09:37.995 00:09:37.995 ' 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:37.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.995 --rc genhtml_branch_coverage=1 00:09:37.995 --rc genhtml_function_coverage=1 00:09:37.995 --rc genhtml_legend=1 00:09:37.995 --rc geninfo_all_blocks=1 00:09:37.995 --rc geninfo_unexecuted_blocks=1 00:09:37.995 00:09:37.995 ' 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:37.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.995 --rc genhtml_branch_coverage=1 00:09:37.995 --rc genhtml_function_coverage=1 00:09:37.995 --rc genhtml_legend=1 00:09:37.995 --rc geninfo_all_blocks=1 00:09:37.995 --rc geninfo_unexecuted_blocks=1 00:09:37.995 00:09:37.995 ' 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
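The block just traced (it repeats once per run_test scope: nvmf_tcp, nvmf_target_core, nvmf_host_management) picks the lcov --rc flags to export for coverage collection. Only the branch actually taken shows up in the trace; condensed, with the helper internals from scripts/common.sh omitted:

# coverage option setup from autotest_common.sh, as traced (lcov 1.15 found)
ver=$(lcov --version | awk '{print $NF}')     # -> 1.15
if lt "$ver" 2; then                          # cmp_versions "$ver" '<' 2
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi                                            # an lcov >= 2 would take the other
                                              # branch (not seen in this trace)
export LCOV_OPTS="$lcov_rc_opt
    --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
    --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1"
export LCOV="lcov $LCOV_OPTS"                 # the ready-to-run lcov command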
00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.995 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.996 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.996 03:59:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.996 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:38.255 Cannot find device "nvmf_init_br" 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:38.255 Cannot find device "nvmf_init_br2" 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:38.255 Cannot find device "nvmf_tgt_br" 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:38.255 Cannot find device "nvmf_tgt_br2" 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:09:38.255 03:59:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:38.255 Cannot find device "nvmf_init_br" 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:38.255 Cannot find device "nvmf_init_br2" 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:38.255 Cannot find device "nvmf_tgt_br" 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:38.255 Cannot find device "nvmf_tgt_br2" 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:38.255 Cannot find device "nvmf_br" 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:38.255 Cannot find device "nvmf_init_if" 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:38.255 Cannot find device "nvmf_init_if2" 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:38.255 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:38.255 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:38.255 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:38.256 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:38.256 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:38.256 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:38.256 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:38.256 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:38.256 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:38.256 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:38.514 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:38.514 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:09:38.514 00:09:38.514 --- 10.0.0.3 ping statistics --- 00:09:38.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.514 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:38.514 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:38.514 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:09:38.514 00:09:38.514 --- 10.0.0.4 ping statistics --- 00:09:38.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.514 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:38.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:38.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:09:38.514 00:09:38.514 --- 10.0.0.1 ping statistics --- 00:09:38.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.514 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:38.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:38.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:09:38.514 00:09:38.514 --- 10.0.0.2 ping statistics --- 00:09:38.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.514 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.514 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:09:38.515 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:38.515 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.515 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:38.515 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:38.515 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.515 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:38.515 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:38.773 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:38.773 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:38.773 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:38.773 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:38.773 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:38.773 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:38.773 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62756 00:09:38.773 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:38.773 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62756 00:09:38.773 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62756 ']' 00:09:38.773 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.773 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.773 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.773 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.773 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:38.773 [2024-12-09 03:59:20.542380] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
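At this point nvmf_veth_init has finished building the test network and nvmfappstart has launched the target inside the new namespace. Condensed, with the second initiator/target pair, the link up/down housekeeping, and the full binary path trimmed, the bring-up traced above amounts to:

ip netns add nvmf_tgt_ns_spdk                              # target namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                    # bridge both veth
ip link set nvmf_tgt_br master nvmf_br                     # peers together
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                         # host -> namespace
modprobe nvme-tcp                                          # kernel initiator
ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &   # the target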
00:09:38.773 [2024-12-09 03:59:20.542508] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.773 [2024-12-09 03:59:20.703217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.032 [2024-12-09 03:59:20.800302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.032 [2024-12-09 03:59:20.800651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.032 [2024-12-09 03:59:20.800820] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.032 [2024-12-09 03:59:20.800971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.032 [2024-12-09 03:59:20.801014] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:39.032 [2024-12-09 03:59:20.802673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.032 [2024-12-09 03:59:20.802779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.032 [2024-12-09 03:59:20.803067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:39.032 [2024-12-09 03:59:20.803078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.032 [2024-12-09 03:59:20.882578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.008 [2024-12-09 03:59:21.680410] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
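The four "Reactor started on core ..." notices above follow directly from the -m 0x1E mask handed to nvmf_tgt: it is a CPU bitmask, not a core count. A quick way to read it:

printf '%d\n' 0x1E          # 30
echo 'obase=2; 30' | bc     # 11110 -> bits 1-4 set -> reactors on cores 1-4
# bit 0 is clear, leaving core 0 free for the bdevperf initiator, which is
# started later with -c 0x1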
00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.008 Malloc0 00:09:40.008 [2024-12-09 03:59:21.772159] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62816 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62816 /var/tmp/bdevperf.sock 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62816 ']' 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:40.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
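host_management.sh assembles its target configuration by cat-ing a short RPC script into rpc_cmd (the @23/@30 steps above); the trace only shows the outcome, a Malloc0 bdev and an NVMe/TCP listener on 10.0.0.3:4420. A plausible reconstruction of that RPC list, put together from the sizes and NQNs that appear elsewhere in the trace; the real file contents are never echoed, so treat every line as an assumption:

# hypothetical rpcs.txt fed to rpc_cmd in batch mode (assumed, not from the trace)
bdev_malloc_create 64 512 -b Malloc0                 # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0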
00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:40.008 { 00:09:40.008 "params": { 00:09:40.008 "name": "Nvme$subsystem", 00:09:40.008 "trtype": "$TEST_TRANSPORT", 00:09:40.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:40.008 "adrfam": "ipv4", 00:09:40.008 "trsvcid": "$NVMF_PORT", 00:09:40.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:40.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:40.008 "hdgst": ${hdgst:-false}, 00:09:40.008 "ddgst": ${ddgst:-false} 00:09:40.008 }, 00:09:40.008 "method": "bdev_nvme_attach_controller" 00:09:40.008 } 00:09:40.008 EOF 00:09:40.008 )") 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:40.008 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:40.008 "params": { 00:09:40.008 "name": "Nvme0", 00:09:40.008 "trtype": "tcp", 00:09:40.008 "traddr": "10.0.0.3", 00:09:40.008 "adrfam": "ipv4", 00:09:40.008 "trsvcid": "4420", 00:09:40.008 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:40.008 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:40.008 "hdgst": false, 00:09:40.008 "ddgst": false 00:09:40.008 }, 00:09:40.008 "method": "bdev_nvme_attach_controller" 00:09:40.008 }' 00:09:40.008 [2024-12-09 03:59:21.882164] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:09:40.008 [2024-12-09 03:59:21.882503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62816 ] 00:09:40.267 [2024-12-09 03:59:22.033928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.267 [2024-12-09 03:59:22.118756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.267 [2024-12-09 03:59:22.207981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:40.526 Running I/O for 10 seconds... 
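What follows is host_management.sh's waitforio helper: bdevperf is queried over its private RPC socket until the Nvme0n1 bdev reports enough reads to prove the TCP path works, after which the test moves on to the remove_host/add_host checks. Its body, condensed from the trace (the retry delay is assumed; this run passes on the first poll with 835 reads):

ret=1
for ((i = 10; i != 0; i--)); do
    read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
        jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        ret=0                   # enough I/O observed through the target
        break
    fi
    sleep 1                     # assumed between retries; not hit in this run
done
return $ret                     # 0 here lets the host-management checks proceed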
00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.102 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.102 03:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:09:41.102 03:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:09:41.102 03:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:41.102 03:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:41.102 03:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:41.102 03:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:41.102 03:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.103 03:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.103 03:59:23 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.103 03:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:41.103 03:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.103 03:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.103 03:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.103 03:59:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:41.103 [2024-12-09 03:59:23.039760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.039959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.039995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.103 [2024-12-09 03:59:23.040809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.103 [2024-12-09 03:59:23.040819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.040831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.040841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.040853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.040862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.040874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.040885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.040897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.040907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.040919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.040929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.040941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.040950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.040962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.040971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.040983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.040993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.104 [2024-12-09 03:59:23.041494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51c00 is same with the state(6) to be set 00:09:41.104 [2024-12-09 03:59:23.041730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:41.104 [2024-12-09 03:59:23.041750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:41.104 [2024-12-09 03:59:23.041772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:41.104 [2024-12-09 03:59:23.041792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:09:41.104 [2024-12-09 03:59:23.041812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.104 [2024-12-09 03:59:23.041822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b52ce0 is same with the state(6) to be set 00:09:41.104 [2024-12-09 03:59:23.042938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:41.104 task offset: 122880 on job bdev=Nvme0n1 fails 00:09:41.104 00:09:41.104 Latency(us) 00:09:41.104 [2024-12-09T03:59:23.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.104 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:41.104 Job: Nvme0n1 ended in about 0.69 seconds with error 00:09:41.104 Verification LBA range: start 0x0 length 0x400 00:09:41.104 Nvme0n1 : 0.69 1381.74 86.36 92.12 0.00 42375.13 2517.18 39321.60 00:09:41.104 [2024-12-09T03:59:23.054Z] =================================================================================================================== 00:09:41.104 [2024-12-09T03:59:23.054Z] Total : 1381.74 86.36 92.12 0.00 42375.13 2517.18 39321.60 00:09:41.104 [2024-12-09 03:59:23.045869] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:41.104 [2024-12-09 03:59:23.045997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b52ce0 (9): Bad file descriptor 00:09:41.363 [2024-12-09 03:59:23.054777] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:09:42.298 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62816 00:09:42.298 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62816) - No such process 00:09:42.298 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:42.298 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:42.298 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:42.298 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:42.298 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:42.298 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:42.298 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.298 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.298 { 00:09:42.298 "params": { 00:09:42.298 "name": "Nvme$subsystem", 00:09:42.298 "trtype": "$TEST_TRANSPORT", 00:09:42.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.298 "adrfam": "ipv4", 00:09:42.298 "trsvcid": "$NVMF_PORT", 00:09:42.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.298 "hdgst": ${hdgst:-false}, 00:09:42.298 "ddgst": ${ddgst:-false} 00:09:42.298 }, 00:09:42.298 "method": 
"bdev_nvme_attach_controller" 00:09:42.298 } 00:09:42.298 EOF 00:09:42.298 )") 00:09:42.298 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:42.298 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:42.298 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:42.298 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.298 "params": { 00:09:42.298 "name": "Nvme0", 00:09:42.298 "trtype": "tcp", 00:09:42.298 "traddr": "10.0.0.3", 00:09:42.298 "adrfam": "ipv4", 00:09:42.298 "trsvcid": "4420", 00:09:42.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:42.298 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:42.298 "hdgst": false, 00:09:42.298 "ddgst": false 00:09:42.298 }, 00:09:42.298 "method": "bdev_nvme_attach_controller" 00:09:42.298 }' 00:09:42.298 [2024-12-09 03:59:24.103129] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:09:42.298 [2024-12-09 03:59:24.103258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62854 ] 00:09:42.557 [2024-12-09 03:59:24.252715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.557 [2024-12-09 03:59:24.328771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.557 [2024-12-09 03:59:24.418588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.816 Running I/O for 1 seconds... 00:09:43.762 1472.00 IOPS, 92.00 MiB/s 00:09:43.762 Latency(us) 00:09:43.762 [2024-12-09T03:59:25.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.762 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:43.762 Verification LBA range: start 0x0 length 0x400 00:09:43.762 Nvme0n1 : 1.03 1496.52 93.53 0.00 0.00 41926.84 4587.52 39559.91 00:09:43.762 [2024-12-09T03:59:25.712Z] =================================================================================================================== 00:09:43.762 [2024-12-09T03:59:25.712Z] Total : 1496.52 93.53 0.00 0.00 41926.84 4587.52 39559.91 00:09:44.020 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:44.020 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:44.020 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:44.020 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:44.020 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:44.020 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:44.020 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:44.020 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:44.020 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:44.020 
03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:44.020 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:44.020 rmmod nvme_tcp 00:09:44.020 rmmod nvme_fabrics 00:09:44.278 rmmod nvme_keyring 00:09:44.278 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.278 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:44.278 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:44.278 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62756 ']' 00:09:44.278 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62756 00:09:44.278 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62756 ']' 00:09:44.278 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62756 00:09:44.278 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:44.278 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.278 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62756 00:09:44.278 killing process with pid 62756 00:09:44.278 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:44.278 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:44.278 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62756' 00:09:44.278 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62756 00:09:44.278 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62756 00:09:44.567 [2024-12-09 03:59:26.307719] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:44.567 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:44.850 00:09:44.850 real 0m6.912s 00:09:44.850 user 0m25.223s 00:09:44.850 sys 0m1.843s 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.850 ************************************ 00:09:44.850 END TEST nvmf_host_management 00:09:44.850 ************************************ 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.850 ************************************ 00:09:44.850 START TEST nvmf_lvol 00:09:44.850 ************************************ 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:44.850 * Looking for test 
storage... 00:09:44.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:09:44.850 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.110 --rc genhtml_branch_coverage=1 00:09:45.110 --rc genhtml_function_coverage=1 00:09:45.110 --rc genhtml_legend=1 00:09:45.110 --rc geninfo_all_blocks=1 00:09:45.110 --rc geninfo_unexecuted_blocks=1 00:09:45.110 00:09:45.110 ' 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.110 --rc genhtml_branch_coverage=1 00:09:45.110 --rc genhtml_function_coverage=1 00:09:45.110 --rc genhtml_legend=1 00:09:45.110 --rc geninfo_all_blocks=1 00:09:45.110 --rc geninfo_unexecuted_blocks=1 00:09:45.110 00:09:45.110 ' 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:45.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.110 --rc genhtml_branch_coverage=1 00:09:45.110 --rc genhtml_function_coverage=1 00:09:45.110 --rc genhtml_legend=1 00:09:45.110 --rc geninfo_all_blocks=1 00:09:45.110 --rc geninfo_unexecuted_blocks=1 00:09:45.110 00:09:45.110 ' 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.110 --rc genhtml_branch_coverage=1 00:09:45.110 --rc genhtml_function_coverage=1 00:09:45.110 --rc genhtml_legend=1 00:09:45.110 --rc geninfo_all_blocks=1 00:09:45.110 --rc geninfo_unexecuted_blocks=1 00:09:45.110 00:09:45.110 ' 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.110 03:59:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.110 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:45.110 
03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:45.110 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:45.110 Cannot find device "nvmf_init_br" 00:09:45.111 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:45.111 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:45.111 Cannot find device "nvmf_init_br2" 00:09:45.111 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:45.111 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:45.111 Cannot find device "nvmf_tgt_br" 00:09:45.111 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:09:45.111 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:45.111 Cannot find device "nvmf_tgt_br2" 00:09:45.111 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:09:45.111 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:45.111 Cannot find device "nvmf_init_br" 00:09:45.111 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:09:45.111 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:45.111 Cannot find device "nvmf_init_br2" 00:09:45.111 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:09:45.111 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:45.111 Cannot find device "nvmf_tgt_br" 00:09:45.111 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:09:45.111 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:45.111 Cannot find device "nvmf_tgt_br2" 00:09:45.111 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:09:45.111 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:45.111 Cannot find device "nvmf_br" 00:09:45.111 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:09:45.111 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:45.111 Cannot find device "nvmf_init_if" 00:09:45.111 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:09:45.111 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:45.111 Cannot find device "nvmf_init_if2" 00:09:45.111 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:09:45.111 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:45.111 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.111 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:09:45.111 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:45.111 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:09:45.111 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:09:45.111 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:45.390 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:45.390 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:09:45.390 00:09:45.390 --- 10.0.0.3 ping statistics --- 00:09:45.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.390 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:45.390 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:45.390 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:09:45.390 00:09:45.390 --- 10.0.0.4 ping statistics --- 00:09:45.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.390 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:45.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:09:45.390 00:09:45.390 --- 10.0.0.1 ping statistics --- 00:09:45.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.390 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:45.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:45.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:09:45.390 00:09:45.390 --- 10.0.0.2 ping statistics --- 00:09:45.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.390 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:45.390 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:45.391 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:45.391 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=63124 00:09:45.391 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:45.391 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 63124 00:09:45.391 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 63124 ']' 00:09:45.391 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.391 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.391 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.391 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.391 03:59:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:45.649 [2024-12-09 03:59:27.386655] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
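Annotation: the nvmfappstart step above launches the SPDK target inside the test namespace and then blocks until the application answers on its UNIX-domain RPC socket (/var/tmp/spdk.sock). A minimal standalone sketch of that launch-and-wait pattern is below, assuming the paths printed in this log; the polling loop stands in for the harness's waitforlisten helper, so treat it as an illustration rather than the harness code itself.

    # Start nvmf_tgt inside the target namespace (core mask 0x7, all trace groups enabled).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!

    # Poll the default RPC socket until the target is ready to accept commands.
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            > /dev/null 2>&1 && break
        sleep 0.1
    done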
00:09:45.649 [2024-12-09 03:59:27.386755] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.649 [2024-12-09 03:59:27.570082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:45.908 [2024-12-09 03:59:27.660848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.908 [2024-12-09 03:59:27.660894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.908 [2024-12-09 03:59:27.660905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.908 [2024-12-09 03:59:27.660914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.908 [2024-12-09 03:59:27.660921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.908 [2024-12-09 03:59:27.662383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.908 [2024-12-09 03:59:27.662461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.908 [2024-12-09 03:59:27.662464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.908 [2024-12-09 03:59:27.735655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:46.474 03:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.474 03:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:46.474 03:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:46.474 03:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:46.474 03:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:46.754 03:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.754 03:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:47.013 [2024-12-09 03:59:28.747779] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.013 03:59:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.271 03:59:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:47.271 03:59:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.528 03:59:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:47.528 03:59:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:47.787 03:59:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:48.044 03:59:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9c53c949-ff16-401a-8b91-fdfdb59b560e 00:09:48.044 03:59:29 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9c53c949-ff16-401a-8b91-fdfdb59b560e lvol 20 00:09:48.303 03:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b6f841bc-aadf-4d91-ba93-eaaa219d3db6 00:09:48.303 03:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:48.561 03:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b6f841bc-aadf-4d91-ba93-eaaa219d3db6 00:09:48.819 03:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:49.080 [2024-12-09 03:59:30.956433] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:49.080 03:59:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:49.644 03:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=63205 00:09:49.644 03:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:49.644 03:59:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:50.577 03:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot b6f841bc-aadf-4d91-ba93-eaaa219d3db6 MY_SNAPSHOT 00:09:50.835 03:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d96c6a4b-b4c5-484b-9f9d-c7ac63a20df3 00:09:50.835 03:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize b6f841bc-aadf-4d91-ba93-eaaa219d3db6 30 00:09:51.093 03:59:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d96c6a4b-b4c5-484b-9f9d-c7ac63a20df3 MY_CLONE 00:09:51.659 03:59:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=097571a0-f8a2-4340-9b6f-14ac1a4fd379 00:09:51.659 03:59:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 097571a0-f8a2-4340-9b6f-14ac1a4fd379 00:09:51.917 03:59:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 63205 00:10:00.028 Initializing NVMe Controllers 00:10:00.028 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:10:00.028 Controller IO queue size 128, less than required. 00:10:00.028 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:00.028 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:00.028 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:00.028 Initialization complete. Launching workers. 
00:10:00.028 ======================================================== 00:10:00.028 Latency(us) 00:10:00.029 Device Information : IOPS MiB/s Average min max 00:10:00.029 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9535.59 37.25 13424.17 4480.74 67551.00 00:10:00.029 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9365.09 36.58 13669.98 4521.45 67799.49 00:10:00.029 ======================================================== 00:10:00.029 Total : 18900.69 73.83 13545.97 4480.74 67799.49 00:10:00.029 00:10:00.029 03:59:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:00.287 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b6f841bc-aadf-4d91-ba93-eaaa219d3db6 00:10:00.546 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9c53c949-ff16-401a-8b91-fdfdb59b560e 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:00.803 rmmod nvme_tcp 00:10:00.803 rmmod nvme_fabrics 00:10:00.803 rmmod nvme_keyring 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 63124 ']' 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 63124 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 63124 ']' 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 63124 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.803 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63124 00:10:00.803 killing process with pid 63124 00:10:00.804 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.804 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.804 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 63124' 00:10:00.804 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 63124 00:10:00.804 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 63124 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:01.370 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:01.371 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:01.371 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:01.371 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:01.629 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:01.629 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.630 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.630 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.630 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:10:01.630 ************************************ 00:10:01.630 END TEST nvmf_lvol 00:10:01.630 ************************************ 00:10:01.630 00:10:01.630 real 0m16.701s 00:10:01.630 user 
1m7.600s 00:10:01.630 sys 0m4.497s 00:10:01.630 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.630 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:01.630 03:59:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:01.630 03:59:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:01.630 03:59:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.630 03:59:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:01.630 ************************************ 00:10:01.630 START TEST nvmf_lvs_grow 00:10:01.630 ************************************ 00:10:01.630 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:01.630 * Looking for test storage... 00:10:01.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:01.630 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:01.630 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:10:01.630 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:01.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.889 --rc genhtml_branch_coverage=1 00:10:01.889 --rc genhtml_function_coverage=1 00:10:01.889 --rc genhtml_legend=1 00:10:01.889 --rc geninfo_all_blocks=1 00:10:01.889 --rc geninfo_unexecuted_blocks=1 00:10:01.889 00:10:01.889 ' 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:01.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.889 --rc genhtml_branch_coverage=1 00:10:01.889 --rc genhtml_function_coverage=1 00:10:01.889 --rc genhtml_legend=1 00:10:01.889 --rc geninfo_all_blocks=1 00:10:01.889 --rc geninfo_unexecuted_blocks=1 00:10:01.889 00:10:01.889 ' 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:01.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.889 --rc genhtml_branch_coverage=1 00:10:01.889 --rc genhtml_function_coverage=1 00:10:01.889 --rc genhtml_legend=1 00:10:01.889 --rc geninfo_all_blocks=1 00:10:01.889 --rc geninfo_unexecuted_blocks=1 00:10:01.889 00:10:01.889 ' 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:01.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.889 --rc genhtml_branch_coverage=1 00:10:01.889 --rc genhtml_function_coverage=1 00:10:01.889 --rc genhtml_legend=1 00:10:01.889 --rc geninfo_all_blocks=1 00:10:01.889 --rc geninfo_unexecuted_blocks=1 00:10:01.889 00:10:01.889 ' 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:01.889 03:59:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.889 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:01.890 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
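Annotation: the "[: : integer expression expected" message above is bash's test builtin complaining that an empty string was handed to the numeric -eq operator at common.sh line 33; the variable being tested is not named in this log, so the guard below uses a placeholder name purely for illustration. Defaulting the value before the comparison avoids the message:

    # Hypothetical guard; SOME_OPTIONAL_FLAG stands in for whatever variable was empty
    # at common.sh line 33. ${var:-0} supplies 0 when the variable is unset or blank.
    if [ "${SOME_OPTIONAL_FLAG:-0}" -eq 1 ]; then
        echo "optional feature enabled"
    fi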
00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:01.890 Cannot find device "nvmf_init_br" 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:01.890 Cannot find device "nvmf_init_br2" 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:01.890 Cannot find device "nvmf_tgt_br" 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:01.890 Cannot find device "nvmf_tgt_br2" 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:01.890 Cannot find device "nvmf_init_br" 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:01.890 Cannot find device "nvmf_init_br2" 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:01.890 Cannot find device "nvmf_tgt_br" 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:01.890 Cannot find device "nvmf_tgt_br2" 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:01.890 Cannot find device "nvmf_br" 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:01.890 Cannot find device "nvmf_init_if" 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:01.890 Cannot find device "nvmf_init_if2" 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:01.890 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:01.890 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:01.890 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:02.149 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:02.149 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:02.149 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:02.149 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:02.149 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:02.150 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:02.150 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:02.150 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:02.150 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:02.150 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:02.150 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:02.150 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:02.150 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:02.150 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:02.150 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:02.150 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:02.150 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:02.150 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:02.150 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:02.150 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:02.150 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
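Annotation: the block above rebuilds the same virtual topology the earlier nvmf_lvol run used: initiator-side veth ends stay in the root namespace with 10.0.0.1 and 10.0.0.2, the target-side ends move into nvmf_tgt_ns_spdk with 10.0.0.3 and 10.0.0.4, and the peer ends are enslaved to one bridge so initiator and target traffic can cross. Condensed to a single pair per side, the sequence traced in the log is:

    # Namespace that will host the SPDK target.
    ip netns add nvmf_tgt_ns_spdk

    # Initiator-side veth pair; the "if" end keeps its address in the root namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip link set nvmf_init_if up

    # Target-side veth pair; the "if" end is moved into the namespace and addressed there.
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the peer ends so 10.0.0.1 (root namespace) can reach 10.0.0.3 (namespace).
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br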
00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:02.150 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:02.150 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:10:02.150 00:10:02.150 --- 10.0.0.3 ping statistics --- 00:10:02.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.150 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:02.150 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:02.150 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:10:02.150 00:10:02.150 --- 10.0.0.4 ping statistics --- 00:10:02.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.150 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:02.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:02.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:02.150 00:10:02.150 --- 10.0.0.1 ping statistics --- 00:10:02.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.150 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:02.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:02.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:10:02.150 00:10:02.150 --- 10.0.0.2 ping statistics --- 00:10:02.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.150 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63584 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63584 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63584 ']' 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.150 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:02.409 [2024-12-09 03:59:44.142193] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
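Annotation: the ipts and iptr helpers in this log are thin wrappers around iptables. Each ACCEPT rule for TCP port 4420 is inserted with an -m comment tag beginning with SPDK_NVMF, and the teardown path (visible in the earlier nvmf_lvol cleanup) removes every tagged rule in one pass by filtering iptables-save output and restoring what is left. A sketch of that insert/cleanup pair, using the rules shown above:

    # Open the NVMe/TCP port on the initiator interface and tag the rule for later removal.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    # Let bridged traffic pass between the veth peers.
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    # Teardown: drop every rule carrying the SPDK_NVMF tag in a single save/filter/restore.
    iptables-save | grep -v SPDK_NVMF | iptables-restore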
00:10:02.409 [2024-12-09 03:59:44.142575] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.409 [2024-12-09 03:59:44.293029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.666 [2024-12-09 03:59:44.371177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.666 [2024-12-09 03:59:44.371273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.666 [2024-12-09 03:59:44.371301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.666 [2024-12-09 03:59:44.371346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.666 [2024-12-09 03:59:44.371355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.666 [2024-12-09 03:59:44.371820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.666 [2024-12-09 03:59:44.450395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:02.666 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.666 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:10:02.666 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:02.666 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:02.666 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:02.666 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.666 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:03.233 [2024-12-09 03:59:44.887622] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.233 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:03.233 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:03.233 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.233 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:03.233 ************************************ 00:10:03.233 START TEST lvs_grow_clean 00:10:03.233 ************************************ 00:10:03.233 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:10:03.233 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:03.233 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:03.233 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:03.233 03:59:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:03.233 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:03.233 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:03.233 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:03.233 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:03.233 03:59:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:03.491 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:03.491 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:03.750 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a1fbfb9d-f0d1-4421-a3c9-5f52957d5296 00:10:03.750 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1fbfb9d-f0d1-4421-a3c9-5f52957d5296 00:10:03.750 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:04.007 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:04.007 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:04.007 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a1fbfb9d-f0d1-4421-a3c9-5f52957d5296 lvol 150 00:10:04.264 03:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=73bf659b-32fd-4136-a76a-e074aea5c8d7 00:10:04.264 03:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:04.264 03:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:04.523 [2024-12-09 03:59:46.436214] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:04.523 [2024-12-09 03:59:46.436321] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:04.523 true 00:10:04.523 03:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1fbfb9d-f0d1-4421-a3c9-5f52957d5296 00:10:04.523 03:59:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:04.781 03:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:04.781 03:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:05.085 03:59:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 73bf659b-32fd-4136-a76a-e074aea5c8d7 00:10:05.388 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:05.647 [2024-12-09 03:59:47.493043] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:05.647 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:05.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:05.905 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63665 00:10:05.905 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:05.905 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:05.905 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63665 /var/tmp/bdevperf.sock 00:10:05.905 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63665 ']' 00:10:05.905 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:05.905 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.905 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:05.905 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.905 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:05.905 [2024-12-09 03:59:47.844352] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
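Annotation: lvs_grow_clean exercises growing a logical volume store in place. The steps traced above back the store with a 200M file exposed as an AIO bdev, create a 4 MiB-cluster lvstore (49 usable data clusters) and a 150 MiB lvol on it, then enlarge the file to 400M, rescan the AIO bdev, and grow the lvstore into the new space (the cluster count is re-read further below). The same flow as a standalone sketch; paths and RPC names follow the log, and the lvstore UUID is whatever bdev_lvol_create_lvstore prints:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    # 200M backing file -> AIO bdev (4 KiB blocks) -> lvstore with 4 MiB clusters -> 150 MiB lvol.
    truncate -s 200M "$aio_file"
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_create -u "$lvs" lvol 150

    # Grow the backing file, let the AIO bdev pick up the new size, then grow the lvstore.
    truncate -s 400M "$aio_file"
    $rpc bdev_aio_rescan aio_bdev
    $rpc bdev_lvol_grow_lvstore -u "$lvs"
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'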
00:10:05.905 [2024-12-09 03:59:47.844756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63665 ] 00:10:06.163 [2024-12-09 03:59:48.001495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.163 [2024-12-09 03:59:48.088063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.421 [2024-12-09 03:59:48.171412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:06.987 03:59:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.987 03:59:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:10:06.987 03:59:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:07.555 Nvme0n1 00:10:07.555 03:59:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:07.814 [ 00:10:07.814 { 00:10:07.814 "name": "Nvme0n1", 00:10:07.814 "aliases": [ 00:10:07.814 "73bf659b-32fd-4136-a76a-e074aea5c8d7" 00:10:07.814 ], 00:10:07.814 "product_name": "NVMe disk", 00:10:07.814 "block_size": 4096, 00:10:07.814 "num_blocks": 38912, 00:10:07.814 "uuid": "73bf659b-32fd-4136-a76a-e074aea5c8d7", 00:10:07.814 "numa_id": -1, 00:10:07.814 "assigned_rate_limits": { 00:10:07.814 "rw_ios_per_sec": 0, 00:10:07.814 "rw_mbytes_per_sec": 0, 00:10:07.814 "r_mbytes_per_sec": 0, 00:10:07.814 "w_mbytes_per_sec": 0 00:10:07.814 }, 00:10:07.814 "claimed": false, 00:10:07.814 "zoned": false, 00:10:07.814 "supported_io_types": { 00:10:07.814 "read": true, 00:10:07.814 "write": true, 00:10:07.814 "unmap": true, 00:10:07.814 "flush": true, 00:10:07.814 "reset": true, 00:10:07.814 "nvme_admin": true, 00:10:07.814 "nvme_io": true, 00:10:07.814 "nvme_io_md": false, 00:10:07.814 "write_zeroes": true, 00:10:07.814 "zcopy": false, 00:10:07.814 "get_zone_info": false, 00:10:07.814 "zone_management": false, 00:10:07.814 "zone_append": false, 00:10:07.814 "compare": true, 00:10:07.814 "compare_and_write": true, 00:10:07.814 "abort": true, 00:10:07.814 "seek_hole": false, 00:10:07.814 "seek_data": false, 00:10:07.814 "copy": true, 00:10:07.814 "nvme_iov_md": false 00:10:07.814 }, 00:10:07.814 "memory_domains": [ 00:10:07.814 { 00:10:07.814 "dma_device_id": "system", 00:10:07.814 "dma_device_type": 1 00:10:07.814 } 00:10:07.814 ], 00:10:07.814 "driver_specific": { 00:10:07.814 "nvme": [ 00:10:07.814 { 00:10:07.814 "trid": { 00:10:07.814 "trtype": "TCP", 00:10:07.814 "adrfam": "IPv4", 00:10:07.814 "traddr": "10.0.0.3", 00:10:07.814 "trsvcid": "4420", 00:10:07.814 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:07.814 }, 00:10:07.814 "ctrlr_data": { 00:10:07.814 "cntlid": 1, 00:10:07.814 "vendor_id": "0x8086", 00:10:07.814 "model_number": "SPDK bdev Controller", 00:10:07.814 "serial_number": "SPDK0", 00:10:07.814 "firmware_revision": "25.01", 00:10:07.814 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:07.814 "oacs": { 00:10:07.815 "security": 0, 00:10:07.815 "format": 0, 00:10:07.815 "firmware": 0, 
00:10:07.815 "ns_manage": 0 00:10:07.815 }, 00:10:07.815 "multi_ctrlr": true, 00:10:07.815 "ana_reporting": false 00:10:07.815 }, 00:10:07.815 "vs": { 00:10:07.815 "nvme_version": "1.3" 00:10:07.815 }, 00:10:07.815 "ns_data": { 00:10:07.815 "id": 1, 00:10:07.815 "can_share": true 00:10:07.815 } 00:10:07.815 } 00:10:07.815 ], 00:10:07.815 "mp_policy": "active_passive" 00:10:07.815 } 00:10:07.815 } 00:10:07.815 ] 00:10:07.815 03:59:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63694 00:10:07.815 03:59:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:07.815 03:59:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:07.815 Running I/O for 10 seconds... 00:10:08.751 Latency(us) 00:10:08.751 [2024-12-09T03:59:50.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:08.751 Nvme0n1 : 1.00 6741.00 26.33 0.00 0.00 0.00 0.00 0.00 00:10:08.751 [2024-12-09T03:59:50.701Z] =================================================================================================================== 00:10:08.751 [2024-12-09T03:59:50.701Z] Total : 6741.00 26.33 0.00 0.00 0.00 0.00 0.00 00:10:08.751 00:10:09.685 03:59:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a1fbfb9d-f0d1-4421-a3c9-5f52957d5296 00:10:09.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.944 Nvme0n1 : 2.00 6736.00 26.31 0.00 0.00 0.00 0.00 0.00 00:10:09.944 [2024-12-09T03:59:51.894Z] =================================================================================================================== 00:10:09.944 [2024-12-09T03:59:51.894Z] Total : 6736.00 26.31 0.00 0.00 0.00 0.00 0.00 00:10:09.944 00:10:09.944 true 00:10:09.944 03:59:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1fbfb9d-f0d1-4421-a3c9-5f52957d5296 00:10:09.944 03:59:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:10.509 03:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:10.509 03:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:10.509 03:59:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63694 00:10:10.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.775 Nvme0n1 : 3.00 6692.00 26.14 0.00 0.00 0.00 0.00 0.00 00:10:10.775 [2024-12-09T03:59:52.725Z] =================================================================================================================== 00:10:10.775 [2024-12-09T03:59:52.725Z] Total : 6692.00 26.14 0.00 0.00 0.00 0.00 0.00 00:10:10.775 00:10:11.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:11.757 Nvme0n1 : 4.00 6733.50 26.30 0.00 0.00 0.00 0.00 0.00 00:10:11.757 [2024-12-09T03:59:53.707Z] 
=================================================================================================================== 00:10:11.757 [2024-12-09T03:59:53.707Z] Total : 6733.50 26.30 0.00 0.00 0.00 0.00 0.00 00:10:11.757 00:10:13.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.134 Nvme0n1 : 5.00 6733.00 26.30 0.00 0.00 0.00 0.00 0.00 00:10:13.134 [2024-12-09T03:59:55.084Z] =================================================================================================================== 00:10:13.134 [2024-12-09T03:59:55.084Z] Total : 6733.00 26.30 0.00 0.00 0.00 0.00 0.00 00:10:13.134 00:10:14.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.067 Nvme0n1 : 6.00 6700.50 26.17 0.00 0.00 0.00 0.00 0.00 00:10:14.067 [2024-12-09T03:59:56.017Z] =================================================================================================================== 00:10:14.067 [2024-12-09T03:59:56.017Z] Total : 6700.50 26.17 0.00 0.00 0.00 0.00 0.00 00:10:14.067 00:10:15.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.000 Nvme0n1 : 7.00 6741.14 26.33 0.00 0.00 0.00 0.00 0.00 00:10:15.000 [2024-12-09T03:59:56.950Z] =================================================================================================================== 00:10:15.000 [2024-12-09T03:59:56.950Z] Total : 6741.14 26.33 0.00 0.00 0.00 0.00 0.00 00:10:15.000 00:10:15.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.934 Nvme0n1 : 8.00 6771.62 26.45 0.00 0.00 0.00 0.00 0.00 00:10:15.934 [2024-12-09T03:59:57.884Z] =================================================================================================================== 00:10:15.934 [2024-12-09T03:59:57.884Z] Total : 6771.62 26.45 0.00 0.00 0.00 0.00 0.00 00:10:15.934 00:10:16.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.867 Nvme0n1 : 9.00 6753.00 26.38 0.00 0.00 0.00 0.00 0.00 00:10:16.867 [2024-12-09T03:59:58.817Z] =================================================================================================================== 00:10:16.867 [2024-12-09T03:59:58.817Z] Total : 6753.00 26.38 0.00 0.00 0.00 0.00 0.00 00:10:16.867 00:10:17.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.799 Nvme0n1 : 10.00 6738.10 26.32 0.00 0.00 0.00 0.00 0.00 00:10:17.799 [2024-12-09T03:59:59.749Z] =================================================================================================================== 00:10:17.799 [2024-12-09T03:59:59.749Z] Total : 6738.10 26.32 0.00 0.00 0.00 0.00 0.00 00:10:17.799 00:10:17.799 00:10:17.799 Latency(us) 00:10:17.799 [2024-12-09T03:59:59.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.799 Nvme0n1 : 10.00 6747.87 26.36 0.00 0.00 18962.64 6523.81 111053.73 00:10:17.799 [2024-12-09T03:59:59.749Z] =================================================================================================================== 00:10:17.799 [2024-12-09T03:59:59.749Z] Total : 6747.87 26.36 0.00 0.00 18962.64 6523.81 111053.73 00:10:17.799 { 00:10:17.799 "results": [ 00:10:17.799 { 00:10:17.799 "job": "Nvme0n1", 00:10:17.799 "core_mask": "0x2", 00:10:17.799 "workload": "randwrite", 00:10:17.799 "status": "finished", 00:10:17.799 "queue_depth": 128, 00:10:17.799 "io_size": 4096, 00:10:17.799 "runtime": 
10.004496, 00:10:17.799 "iops": 6747.866159374745, 00:10:17.799 "mibps": 26.358852185057597, 00:10:17.799 "io_failed": 0, 00:10:17.799 "io_timeout": 0, 00:10:17.799 "avg_latency_us": 18962.637900670485, 00:10:17.799 "min_latency_us": 6523.810909090909, 00:10:17.799 "max_latency_us": 111053.73090909091 00:10:17.799 } 00:10:17.799 ], 00:10:17.799 "core_count": 1 00:10:17.799 } 00:10:17.799 03:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63665 00:10:17.799 03:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63665 ']' 00:10:17.799 03:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63665 00:10:17.799 03:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:10:17.799 03:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.799 03:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63665 00:10:17.799 killing process with pid 63665 00:10:17.799 Received shutdown signal, test time was about 10.000000 seconds 00:10:17.799 00:10:17.799 Latency(us) 00:10:17.799 [2024-12-09T03:59:59.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.799 [2024-12-09T03:59:59.749Z] =================================================================================================================== 00:10:17.799 [2024-12-09T03:59:59.749Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:17.799 03:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:17.799 03:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:17.799 03:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63665' 00:10:17.799 03:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63665 00:10:17.799 03:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63665 00:10:18.365 04:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:18.365 04:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:18.622 04:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1fbfb9d-f0d1-4421-a3c9-5f52957d5296 00:10:18.622 04:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:19.186 04:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:19.186 04:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:19.186 04:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:19.186 [2024-12-09 04:00:01.055918] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:19.186 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1fbfb9d-f0d1-4421-a3c9-5f52957d5296 00:10:19.186 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:10:19.186 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1fbfb9d-f0d1-4421-a3c9-5f52957d5296 00:10:19.186 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.186 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.186 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.186 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.186 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.186 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.186 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.186 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:19.186 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1fbfb9d-f0d1-4421-a3c9-5f52957d5296 00:10:19.443 request: 00:10:19.443 { 00:10:19.443 "uuid": "a1fbfb9d-f0d1-4421-a3c9-5f52957d5296", 00:10:19.443 "method": "bdev_lvol_get_lvstores", 00:10:19.443 "req_id": 1 00:10:19.443 } 00:10:19.443 Got JSON-RPC error response 00:10:19.443 response: 00:10:19.443 { 00:10:19.443 "code": -19, 00:10:19.443 "message": "No such device" 00:10:19.443 } 00:10:19.443 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:10:19.443 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:19.443 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:19.443 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:19.443 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:20.009 aio_bdev 00:10:20.009 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
73bf659b-32fd-4136-a76a-e074aea5c8d7 00:10:20.009 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=73bf659b-32fd-4136-a76a-e074aea5c8d7 00:10:20.009 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.009 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:10:20.009 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.009 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.009 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:20.267 04:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 73bf659b-32fd-4136-a76a-e074aea5c8d7 -t 2000 00:10:20.267 [ 00:10:20.267 { 00:10:20.267 "name": "73bf659b-32fd-4136-a76a-e074aea5c8d7", 00:10:20.267 "aliases": [ 00:10:20.267 "lvs/lvol" 00:10:20.267 ], 00:10:20.267 "product_name": "Logical Volume", 00:10:20.267 "block_size": 4096, 00:10:20.267 "num_blocks": 38912, 00:10:20.267 "uuid": "73bf659b-32fd-4136-a76a-e074aea5c8d7", 00:10:20.267 "assigned_rate_limits": { 00:10:20.267 "rw_ios_per_sec": 0, 00:10:20.267 "rw_mbytes_per_sec": 0, 00:10:20.267 "r_mbytes_per_sec": 0, 00:10:20.267 "w_mbytes_per_sec": 0 00:10:20.267 }, 00:10:20.267 "claimed": false, 00:10:20.267 "zoned": false, 00:10:20.267 "supported_io_types": { 00:10:20.267 "read": true, 00:10:20.267 "write": true, 00:10:20.267 "unmap": true, 00:10:20.267 "flush": false, 00:10:20.267 "reset": true, 00:10:20.267 "nvme_admin": false, 00:10:20.267 "nvme_io": false, 00:10:20.267 "nvme_io_md": false, 00:10:20.267 "write_zeroes": true, 00:10:20.267 "zcopy": false, 00:10:20.268 "get_zone_info": false, 00:10:20.268 "zone_management": false, 00:10:20.268 "zone_append": false, 00:10:20.268 "compare": false, 00:10:20.268 "compare_and_write": false, 00:10:20.268 "abort": false, 00:10:20.268 "seek_hole": true, 00:10:20.268 "seek_data": true, 00:10:20.268 "copy": false, 00:10:20.268 "nvme_iov_md": false 00:10:20.268 }, 00:10:20.268 "driver_specific": { 00:10:20.268 "lvol": { 00:10:20.268 "lvol_store_uuid": "a1fbfb9d-f0d1-4421-a3c9-5f52957d5296", 00:10:20.268 "base_bdev": "aio_bdev", 00:10:20.268 "thin_provision": false, 00:10:20.268 "num_allocated_clusters": 38, 00:10:20.268 "snapshot": false, 00:10:20.268 "clone": false, 00:10:20.268 "esnap_clone": false 00:10:20.268 } 00:10:20.268 } 00:10:20.268 } 00:10:20.268 ] 00:10:20.268 04:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:10:20.525 04:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1fbfb9d-f0d1-4421-a3c9-5f52957d5296 00:10:20.525 04:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:20.782 04:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:20.782 04:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1fbfb9d-f0d1-4421-a3c9-5f52957d5296 00:10:20.782 04:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:21.040 04:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:21.040 04:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 73bf659b-32fd-4136-a76a-e074aea5c8d7 00:10:21.297 04:00:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a1fbfb9d-f0d1-4421-a3c9-5f52957d5296 00:10:21.563 04:00:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:21.839 04:00:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:22.406 ************************************ 00:10:22.406 END TEST lvs_grow_clean 00:10:22.406 ************************************ 00:10:22.406 00:10:22.406 real 0m19.238s 00:10:22.406 user 0m18.137s 00:10:22.406 sys 0m2.794s 00:10:22.406 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.406 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:22.406 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:22.406 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:22.406 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.406 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:22.406 ************************************ 00:10:22.406 START TEST lvs_grow_dirty 00:10:22.406 ************************************ 00:10:22.406 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:22.406 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:22.406 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:22.406 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:22.406 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:22.406 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:22.406 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:22.406 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:22.406 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:22.406 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:22.664 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:22.664 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:22.922 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=dc202859-f34f-4a9d-abb8-87f3c5a4c01f 00:10:22.922 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:22.922 04:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc202859-f34f-4a9d-abb8-87f3c5a4c01f 00:10:23.491 04:00:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:23.491 04:00:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:23.491 04:00:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dc202859-f34f-4a9d-abb8-87f3c5a4c01f lvol 150 00:10:23.491 04:00:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=bfaeea78-b703-48e8-a098-0a265f7257d8 00:10:23.491 04:00:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:23.491 04:00:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:23.749 [2024-12-09 04:00:05.657014] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:23.749 [2024-12-09 04:00:05.657133] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:23.749 true 00:10:23.749 04:00:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:23.749 04:00:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc202859-f34f-4a9d-abb8-87f3c5a4c01f 00:10:24.314 04:00:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:24.314 04:00:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:24.572 04:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bfaeea78-b703-48e8-a098-0a265f7257d8 00:10:24.829 04:00:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:24.829 [2024-12-09 04:00:06.757656] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:25.086 04:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:25.086 04:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:25.086 04:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63946 00:10:25.086 04:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:25.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:25.086 04:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63946 /var/tmp/bdevperf.sock 00:10:25.086 04:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63946 ']' 00:10:25.086 04:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:25.086 04:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.086 04:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:25.086 04:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.086 04:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:25.343 [2024-12-09 04:00:07.074984] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
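With the target listening again for the dirty variant, the script starts bdevperf with -z so it waits on its own RPC socket, attaches the exported namespace as an initiator-side NVMe bdev, and then kicks off the workload; the same sequence already ran in the clean variant above. Sketched with the options shown in the log (paths relative to the spdk repo):

  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # shows up as bdev Nvme0n1: 38912 x 4096-byte blocks, i.e. the 150M lvol rounded up to 38 clusters of 4 MiB
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests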
00:10:25.343 [2024-12-09 04:00:07.075289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63946 ] 00:10:25.343 [2024-12-09 04:00:07.226475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.343 [2024-12-09 04:00:07.283601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.600 [2024-12-09 04:00:07.358244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:26.166 04:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.166 04:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:26.166 04:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:26.437 Nvme0n1 00:10:26.437 04:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:27.075 [ 00:10:27.075 { 00:10:27.075 "name": "Nvme0n1", 00:10:27.075 "aliases": [ 00:10:27.075 "bfaeea78-b703-48e8-a098-0a265f7257d8" 00:10:27.075 ], 00:10:27.075 "product_name": "NVMe disk", 00:10:27.075 "block_size": 4096, 00:10:27.075 "num_blocks": 38912, 00:10:27.075 "uuid": "bfaeea78-b703-48e8-a098-0a265f7257d8", 00:10:27.075 "numa_id": -1, 00:10:27.075 "assigned_rate_limits": { 00:10:27.075 "rw_ios_per_sec": 0, 00:10:27.075 "rw_mbytes_per_sec": 0, 00:10:27.075 "r_mbytes_per_sec": 0, 00:10:27.075 "w_mbytes_per_sec": 0 00:10:27.075 }, 00:10:27.075 "claimed": false, 00:10:27.075 "zoned": false, 00:10:27.075 "supported_io_types": { 00:10:27.075 "read": true, 00:10:27.075 "write": true, 00:10:27.075 "unmap": true, 00:10:27.075 "flush": true, 00:10:27.075 "reset": true, 00:10:27.075 "nvme_admin": true, 00:10:27.075 "nvme_io": true, 00:10:27.075 "nvme_io_md": false, 00:10:27.075 "write_zeroes": true, 00:10:27.075 "zcopy": false, 00:10:27.075 "get_zone_info": false, 00:10:27.075 "zone_management": false, 00:10:27.075 "zone_append": false, 00:10:27.075 "compare": true, 00:10:27.075 "compare_and_write": true, 00:10:27.075 "abort": true, 00:10:27.075 "seek_hole": false, 00:10:27.075 "seek_data": false, 00:10:27.075 "copy": true, 00:10:27.075 "nvme_iov_md": false 00:10:27.075 }, 00:10:27.075 "memory_domains": [ 00:10:27.075 { 00:10:27.075 "dma_device_id": "system", 00:10:27.075 "dma_device_type": 1 00:10:27.075 } 00:10:27.075 ], 00:10:27.075 "driver_specific": { 00:10:27.075 "nvme": [ 00:10:27.075 { 00:10:27.075 "trid": { 00:10:27.075 "trtype": "TCP", 00:10:27.075 "adrfam": "IPv4", 00:10:27.075 "traddr": "10.0.0.3", 00:10:27.075 "trsvcid": "4420", 00:10:27.075 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:27.075 }, 00:10:27.075 "ctrlr_data": { 00:10:27.075 "cntlid": 1, 00:10:27.075 "vendor_id": "0x8086", 00:10:27.075 "model_number": "SPDK bdev Controller", 00:10:27.075 "serial_number": "SPDK0", 00:10:27.075 "firmware_revision": "25.01", 00:10:27.075 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:27.075 "oacs": { 00:10:27.075 "security": 0, 00:10:27.075 "format": 0, 00:10:27.075 "firmware": 0, 
00:10:27.075 "ns_manage": 0 00:10:27.075 }, 00:10:27.075 "multi_ctrlr": true, 00:10:27.075 "ana_reporting": false 00:10:27.075 }, 00:10:27.075 "vs": { 00:10:27.075 "nvme_version": "1.3" 00:10:27.075 }, 00:10:27.075 "ns_data": { 00:10:27.075 "id": 1, 00:10:27.075 "can_share": true 00:10:27.075 } 00:10:27.075 } 00:10:27.075 ], 00:10:27.075 "mp_policy": "active_passive" 00:10:27.075 } 00:10:27.075 } 00:10:27.075 ] 00:10:27.075 04:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63969 00:10:27.075 04:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:27.075 04:00:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:27.075 Running I/O for 10 seconds... 00:10:28.008 Latency(us) 00:10:28.008 [2024-12-09T04:00:09.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.008 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:28.008 Nvme0n1 : 1.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:10:28.008 [2024-12-09T04:00:09.959Z] =================================================================================================================== 00:10:28.009 [2024-12-09T04:00:09.959Z] Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:10:28.009 00:10:28.940 04:00:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dc202859-f34f-4a9d-abb8-87f3c5a4c01f 00:10:28.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:28.940 Nvme0n1 : 2.00 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:10:28.940 [2024-12-09T04:00:10.890Z] =================================================================================================================== 00:10:28.940 [2024-12-09T04:00:10.890Z] Total : 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:10:28.940 00:10:29.198 true 00:10:29.198 04:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc202859-f34f-4a9d-abb8-87f3c5a4c01f 00:10:29.198 04:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:29.456 04:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:29.456 04:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:29.456 04:00:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63969 00:10:30.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.023 Nvme0n1 : 3.00 7196.67 28.11 0.00 0.00 0.00 0.00 0.00 00:10:30.023 [2024-12-09T04:00:11.973Z] =================================================================================================================== 00:10:30.023 [2024-12-09T04:00:11.973Z] Total : 7196.67 28.11 0.00 0.00 0.00 0.00 0.00 00:10:30.023 00:10:30.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.956 Nvme0n1 : 4.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:10:30.956 [2024-12-09T04:00:12.906Z] 
=================================================================================================================== 00:10:30.956 [2024-12-09T04:00:12.906Z] Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:10:30.956 00:10:31.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.888 Nvme0n1 : 5.00 7035.80 27.48 0.00 0.00 0.00 0.00 0.00 00:10:31.888 [2024-12-09T04:00:13.838Z] =================================================================================================================== 00:10:31.888 [2024-12-09T04:00:13.838Z] Total : 7035.80 27.48 0.00 0.00 0.00 0.00 0.00 00:10:31.888 00:10:32.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.875 Nvme0n1 : 6.00 7006.17 27.37 0.00 0.00 0.00 0.00 0.00 00:10:32.875 [2024-12-09T04:00:14.825Z] =================================================================================================================== 00:10:32.875 [2024-12-09T04:00:14.825Z] Total : 7006.17 27.37 0.00 0.00 0.00 0.00 0.00 00:10:32.875 00:10:34.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.247 Nvme0n1 : 7.00 6823.71 26.66 0.00 0.00 0.00 0.00 0.00 00:10:34.247 [2024-12-09T04:00:16.197Z] =================================================================================================================== 00:10:34.247 [2024-12-09T04:00:16.197Z] Total : 6823.71 26.66 0.00 0.00 0.00 0.00 0.00 00:10:34.247 00:10:35.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.180 Nvme0n1 : 8.00 6764.50 26.42 0.00 0.00 0.00 0.00 0.00 00:10:35.180 [2024-12-09T04:00:17.130Z] =================================================================================================================== 00:10:35.180 [2024-12-09T04:00:17.130Z] Total : 6764.50 26.42 0.00 0.00 0.00 0.00 0.00 00:10:35.180 00:10:36.110 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.110 Nvme0n1 : 9.00 6774.89 26.46 0.00 0.00 0.00 0.00 0.00 00:10:36.110 [2024-12-09T04:00:18.060Z] =================================================================================================================== 00:10:36.110 [2024-12-09T04:00:18.060Z] Total : 6774.89 26.46 0.00 0.00 0.00 0.00 0.00 00:10:36.110 00:10:37.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.049 Nvme0n1 : 10.00 6770.50 26.45 0.00 0.00 0.00 0.00 0.00 00:10:37.049 [2024-12-09T04:00:18.999Z] =================================================================================================================== 00:10:37.049 [2024-12-09T04:00:18.999Z] Total : 6770.50 26.45 0.00 0.00 0.00 0.00 0.00 00:10:37.049 00:10:37.049 00:10:37.049 Latency(us) 00:10:37.049 [2024-12-09T04:00:18.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.049 Nvme0n1 : 10.01 6778.59 26.48 0.00 0.00 18877.64 10247.45 156333.15 00:10:37.049 [2024-12-09T04:00:18.999Z] =================================================================================================================== 00:10:37.049 [2024-12-09T04:00:18.999Z] Total : 6778.59 26.48 0.00 0.00 18877.64 10247.45 156333.15 00:10:37.049 { 00:10:37.049 "results": [ 00:10:37.049 { 00:10:37.049 "job": "Nvme0n1", 00:10:37.049 "core_mask": "0x2", 00:10:37.049 "workload": "randwrite", 00:10:37.049 "status": "finished", 00:10:37.049 "queue_depth": 128, 00:10:37.049 "io_size": 4096, 00:10:37.049 "runtime": 
10.006954, 00:10:37.049 "iops": 6778.586171176564, 00:10:37.049 "mibps": 26.478852231158452, 00:10:37.049 "io_failed": 0, 00:10:37.049 "io_timeout": 0, 00:10:37.049 "avg_latency_us": 18877.635388943167, 00:10:37.049 "min_latency_us": 10247.447272727273, 00:10:37.049 "max_latency_us": 156333.14909090908 00:10:37.049 } 00:10:37.049 ], 00:10:37.049 "core_count": 1 00:10:37.049 } 00:10:37.049 04:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63946 00:10:37.049 04:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63946 ']' 00:10:37.049 04:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63946 00:10:37.049 04:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:37.049 04:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.049 04:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63946 00:10:37.049 killing process with pid 63946 00:10:37.049 Received shutdown signal, test time was about 10.000000 seconds 00:10:37.049 00:10:37.049 Latency(us) 00:10:37.049 [2024-12-09T04:00:18.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.049 [2024-12-09T04:00:18.999Z] =================================================================================================================== 00:10:37.049 [2024-12-09T04:00:18.999Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:37.049 04:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:37.049 04:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:37.049 04:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63946' 00:10:37.049 04:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63946 00:10:37.050 04:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63946 00:10:37.307 04:00:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:37.565 04:00:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:38.132 04:00:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc202859-f34f-4a9d-abb8-87f3c5a4c01f 00:10:38.132 04:00:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:38.132 04:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:38.132 04:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:38.132 04:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63584 
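The backing file was already enlarged and rescanned before the run started; while bdevperf is still writing, the lvstore itself is grown online, and the cluster counts before and after confirm the resize, with the free-cluster check showing the 150M lvol still holding its 38 allocated clusters. The relevant calls from this run, sketched with per-run UUIDs elided:

  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  rpc.py bdev_aio_rescan aio_bdev    # the log notes the AIO bdev growing from 51200 to 102400 blocks
  rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>
  rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # 49 before the grow, 99 after
  rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'         # 61 == 99 total - 38 allocated

Because this is the dirty variant, the nvmf target is then killed with SIGKILL (the kill -9 above) instead of being shut down cleanly, leaving the grown lvstore to be recovered the next time it is loaded.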
00:10:38.132 04:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63584 00:10:38.132 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63584 Killed "${NVMF_APP[@]}" "$@" 00:10:38.132 04:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:38.132 04:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:38.132 04:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:38.132 04:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:38.132 04:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:38.132 04:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=64103 00:10:38.132 04:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:38.132 04:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 64103 00:10:38.132 04:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 64103 ']' 00:10:38.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.132 04:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.132 04:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.132 04:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.132 04:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.132 04:00:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:38.391 [2024-12-09 04:00:20.125500] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:10:38.391 [2024-12-09 04:00:20.125631] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.391 [2024-12-09 04:00:20.275778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.391 [2024-12-09 04:00:20.338118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.649 [2024-12-09 04:00:20.338513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.649 [2024-12-09 04:00:20.338551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.649 [2024-12-09 04:00:20.338561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.649 [2024-12-09 04:00:20.338569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
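After the forced kill a fresh nvmf_tgt is started, and the dirty recovery is exercised simply by re-creating the AIO bdev on the same (now 400M) file: loading it triggers blobstore recovery, which brings the lvstore and its lvol back with the grown geometry intact, as the checks that follow show. A sketch of those recovery-side calls (per-run UUIDs elided):

  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  # the load below logs: bs_recover: Performing recovery on blobstore, then Recover: blob 0x0 / 0x1
  rpc.py bdev_wait_for_examine
  rpc.py bdev_get_bdevs -b <lvol-uuid> -t 2000                                     # recovered lvol reappears as lvs/lvol
  rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'         # still 61
  rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # still 99, the grown size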
00:10:38.649 [2024-12-09 04:00:20.339074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.649 [2024-12-09 04:00:20.414426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.215 04:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.215 04:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:39.215 04:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.215 04:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.215 04:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:39.473 04:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.473 04:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:39.732 [2024-12-09 04:00:21.452063] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:39.732 [2024-12-09 04:00:21.452766] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:39.732 [2024-12-09 04:00:21.453103] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:39.732 04:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:39.732 04:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev bfaeea78-b703-48e8-a098-0a265f7257d8 00:10:39.732 04:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=bfaeea78-b703-48e8-a098-0a265f7257d8 00:10:39.732 04:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.732 04:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:39.732 04:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.732 04:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.732 04:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:39.991 04:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bfaeea78-b703-48e8-a098-0a265f7257d8 -t 2000 00:10:40.250 [ 00:10:40.250 { 00:10:40.250 "name": "bfaeea78-b703-48e8-a098-0a265f7257d8", 00:10:40.250 "aliases": [ 00:10:40.250 "lvs/lvol" 00:10:40.250 ], 00:10:40.250 "product_name": "Logical Volume", 00:10:40.250 "block_size": 4096, 00:10:40.250 "num_blocks": 38912, 00:10:40.250 "uuid": "bfaeea78-b703-48e8-a098-0a265f7257d8", 00:10:40.250 "assigned_rate_limits": { 00:10:40.250 "rw_ios_per_sec": 0, 00:10:40.250 "rw_mbytes_per_sec": 0, 00:10:40.250 "r_mbytes_per_sec": 0, 00:10:40.250 "w_mbytes_per_sec": 0 00:10:40.250 }, 00:10:40.250 
"claimed": false, 00:10:40.250 "zoned": false, 00:10:40.250 "supported_io_types": { 00:10:40.250 "read": true, 00:10:40.250 "write": true, 00:10:40.250 "unmap": true, 00:10:40.250 "flush": false, 00:10:40.250 "reset": true, 00:10:40.250 "nvme_admin": false, 00:10:40.251 "nvme_io": false, 00:10:40.251 "nvme_io_md": false, 00:10:40.251 "write_zeroes": true, 00:10:40.251 "zcopy": false, 00:10:40.251 "get_zone_info": false, 00:10:40.251 "zone_management": false, 00:10:40.251 "zone_append": false, 00:10:40.251 "compare": false, 00:10:40.251 "compare_and_write": false, 00:10:40.251 "abort": false, 00:10:40.251 "seek_hole": true, 00:10:40.251 "seek_data": true, 00:10:40.251 "copy": false, 00:10:40.251 "nvme_iov_md": false 00:10:40.251 }, 00:10:40.251 "driver_specific": { 00:10:40.251 "lvol": { 00:10:40.251 "lvol_store_uuid": "dc202859-f34f-4a9d-abb8-87f3c5a4c01f", 00:10:40.251 "base_bdev": "aio_bdev", 00:10:40.251 "thin_provision": false, 00:10:40.251 "num_allocated_clusters": 38, 00:10:40.251 "snapshot": false, 00:10:40.251 "clone": false, 00:10:40.251 "esnap_clone": false 00:10:40.251 } 00:10:40.251 } 00:10:40.251 } 00:10:40.251 ] 00:10:40.251 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:40.251 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:40.251 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc202859-f34f-4a9d-abb8-87f3c5a4c01f 00:10:40.510 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:40.510 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc202859-f34f-4a9d-abb8-87f3c5a4c01f 00:10:40.510 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:40.768 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:40.768 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:41.026 [2024-12-09 04:00:22.833831] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:41.026 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc202859-f34f-4a9d-abb8-87f3c5a4c01f 00:10:41.026 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:41.026 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc202859-f34f-4a9d-abb8-87f3c5a4c01f 00:10:41.026 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.026 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:41.026 04:00:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.026 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:41.026 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.026 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:41.026 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.026 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:41.026 04:00:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc202859-f34f-4a9d-abb8-87f3c5a4c01f 00:10:41.283 request: 00:10:41.283 { 00:10:41.283 "uuid": "dc202859-f34f-4a9d-abb8-87f3c5a4c01f", 00:10:41.283 "method": "bdev_lvol_get_lvstores", 00:10:41.283 "req_id": 1 00:10:41.283 } 00:10:41.283 Got JSON-RPC error response 00:10:41.283 response: 00:10:41.283 { 00:10:41.283 "code": -19, 00:10:41.283 "message": "No such device" 00:10:41.283 } 00:10:41.283 04:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:41.283 04:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:41.283 04:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:41.283 04:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:41.283 04:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:41.540 aio_bdev 00:10:41.540 04:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bfaeea78-b703-48e8-a098-0a265f7257d8 00:10:41.540 04:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=bfaeea78-b703-48e8-a098-0a265f7257d8 00:10:41.540 04:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.540 04:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:41.540 04:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.540 04:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.540 04:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:41.797 04:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bfaeea78-b703-48e8-a098-0a265f7257d8 -t 2000 00:10:42.061 [ 00:10:42.061 { 
00:10:42.061 "name": "bfaeea78-b703-48e8-a098-0a265f7257d8", 00:10:42.061 "aliases": [ 00:10:42.061 "lvs/lvol" 00:10:42.061 ], 00:10:42.061 "product_name": "Logical Volume", 00:10:42.061 "block_size": 4096, 00:10:42.061 "num_blocks": 38912, 00:10:42.061 "uuid": "bfaeea78-b703-48e8-a098-0a265f7257d8", 00:10:42.061 "assigned_rate_limits": { 00:10:42.061 "rw_ios_per_sec": 0, 00:10:42.061 "rw_mbytes_per_sec": 0, 00:10:42.061 "r_mbytes_per_sec": 0, 00:10:42.061 "w_mbytes_per_sec": 0 00:10:42.061 }, 00:10:42.061 "claimed": false, 00:10:42.061 "zoned": false, 00:10:42.061 "supported_io_types": { 00:10:42.061 "read": true, 00:10:42.061 "write": true, 00:10:42.061 "unmap": true, 00:10:42.061 "flush": false, 00:10:42.061 "reset": true, 00:10:42.061 "nvme_admin": false, 00:10:42.061 "nvme_io": false, 00:10:42.061 "nvme_io_md": false, 00:10:42.061 "write_zeroes": true, 00:10:42.061 "zcopy": false, 00:10:42.061 "get_zone_info": false, 00:10:42.061 "zone_management": false, 00:10:42.061 "zone_append": false, 00:10:42.061 "compare": false, 00:10:42.061 "compare_and_write": false, 00:10:42.061 "abort": false, 00:10:42.061 "seek_hole": true, 00:10:42.061 "seek_data": true, 00:10:42.061 "copy": false, 00:10:42.061 "nvme_iov_md": false 00:10:42.061 }, 00:10:42.061 "driver_specific": { 00:10:42.061 "lvol": { 00:10:42.062 "lvol_store_uuid": "dc202859-f34f-4a9d-abb8-87f3c5a4c01f", 00:10:42.062 "base_bdev": "aio_bdev", 00:10:42.062 "thin_provision": false, 00:10:42.062 "num_allocated_clusters": 38, 00:10:42.062 "snapshot": false, 00:10:42.062 "clone": false, 00:10:42.062 "esnap_clone": false 00:10:42.062 } 00:10:42.062 } 00:10:42.062 } 00:10:42.062 ] 00:10:42.062 04:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:42.062 04:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc202859-f34f-4a9d-abb8-87f3c5a4c01f 00:10:42.062 04:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:42.324 04:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:42.324 04:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc202859-f34f-4a9d-abb8-87f3c5a4c01f 00:10:42.324 04:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:42.909 04:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:42.909 04:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete bfaeea78-b703-48e8-a098-0a265f7257d8 00:10:42.909 04:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dc202859-f34f-4a9d-abb8-87f3c5a4c01f 00:10:43.168 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:43.426 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:44.035 ************************************ 00:10:44.035 END TEST lvs_grow_dirty 00:10:44.035 ************************************ 00:10:44.035 00:10:44.035 real 0m21.550s 00:10:44.035 user 0m44.076s 00:10:44.035 sys 0m8.881s 00:10:44.035 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.035 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:44.035 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:44.035 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:44.035 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:44.035 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:44.035 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:44.035 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:44.035 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:44.035 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:44.035 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:44.035 nvmf_trace.0 00:10:44.035 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:44.035 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:44.035 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:44.035 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:44.293 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:44.293 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:44.293 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:44.293 04:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:44.293 rmmod nvme_tcp 00:10:44.293 rmmod nvme_fabrics 00:10:44.293 rmmod nvme_keyring 00:10:44.293 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:44.293 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:44.293 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:44.293 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 64103 ']' 00:10:44.293 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 64103 00:10:44.293 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 64103 ']' 00:10:44.293 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 64103 00:10:44.293 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:44.293 04:00:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.293 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64103 00:10:44.293 killing process with pid 64103 00:10:44.293 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.293 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.293 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64103' 00:10:44.293 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 64103 00:10:44.293 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 64103 00:10:44.550 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:44.550 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:44.550 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:44.550 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:44.550 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:44.550 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:44.550 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:44.550 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:44.550 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:44.550 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:44.550 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:44.550 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:44.551 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:44.551 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:44.551 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:44.551 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:44.551 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:44.551 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:44.808 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:44.808 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:44.808 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:44.808 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:44.808 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:10:44.808 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.808 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.808 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.808 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:10:44.808 00:10:44.808 real 0m43.204s 00:10:44.808 user 1m9.077s 00:10:44.808 sys 0m12.480s 00:10:44.808 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.808 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:44.808 ************************************ 00:10:44.808 END TEST nvmf_lvs_grow 00:10:44.808 ************************************ 00:10:44.808 04:00:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:44.808 04:00:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.808 04:00:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.808 04:00:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:44.808 ************************************ 00:10:44.808 START TEST nvmf_bdev_io_wait 00:10:44.808 ************************************ 00:10:44.808 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:45.067 * Looking for test storage... 
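For reference, the lvs_grow_dirty pass traced above boils down to a short RPC sequence: hot-remove the AIO base bdev out from under the lvstore, confirm that bdev_lvol_get_lvstores now fails with -19 (No such device), re-create the AIO bdev over the same backing file, wait for examine to re-open the lvstore, and re-check the cluster counts. Pulled together (UUIDs, paths and expected counts taken from the trace; rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; this is a sketch, not test output):

  rpc.py bdev_aio_delete aio_bdev                       # lvstore "lvs" is closed along with its base bdev
  rpc.py bdev_lvol_get_lvstores -u dc202859-f34f-4a9d-abb8-87f3c5a4c01f || true   # expected to fail: -19, No such device
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_wait_for_examine                          # lvol re-opens the lvstore from the re-attached bdev
  rpc.py bdev_get_bdevs -b bfaeea78-b703-48e8-a098-0a265f7257d8 -t 2000
  rpc.py bdev_lvol_get_lvstores -u dc202859-f34f-4a9d-abb8-87f3c5a4c01f | jq -r '.[0].free_clusters'         # 61 expected
  rpc.py bdev_lvol_get_lvstores -u dc202859-f34f-4a9d-abb8-87f3c5a4c01f | jq -r '.[0].total_data_clusters'   # 99 expected
  # cleanup as in the trace: bdev_lvol_delete, bdev_lvol_delete_lvstore -u <uuid>, bdev_aio_delete aio_bdev, rm -f the backing file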
00:10:45.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:45.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.067 --rc genhtml_branch_coverage=1 00:10:45.067 --rc genhtml_function_coverage=1 00:10:45.067 --rc genhtml_legend=1 00:10:45.067 --rc geninfo_all_blocks=1 00:10:45.067 --rc geninfo_unexecuted_blocks=1 00:10:45.067 00:10:45.067 ' 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:45.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.067 --rc genhtml_branch_coverage=1 00:10:45.067 --rc genhtml_function_coverage=1 00:10:45.067 --rc genhtml_legend=1 00:10:45.067 --rc geninfo_all_blocks=1 00:10:45.067 --rc geninfo_unexecuted_blocks=1 00:10:45.067 00:10:45.067 ' 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:45.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.067 --rc genhtml_branch_coverage=1 00:10:45.067 --rc genhtml_function_coverage=1 00:10:45.067 --rc genhtml_legend=1 00:10:45.067 --rc geninfo_all_blocks=1 00:10:45.067 --rc geninfo_unexecuted_blocks=1 00:10:45.067 00:10:45.067 ' 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:45.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.067 --rc genhtml_branch_coverage=1 00:10:45.067 --rc genhtml_function_coverage=1 00:10:45.067 --rc genhtml_legend=1 00:10:45.067 --rc geninfo_all_blocks=1 00:10:45.067 --rc geninfo_unexecuted_blocks=1 00:10:45.067 00:10:45.067 ' 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.067 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:45.068 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
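One detail worth calling out in the trace above: the message "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" comes from the '[' '' -eq 1 ']' test a few lines earlier. The test builtin's -eq operator requires integer operands, so an empty string (an option variable that is unset in this run, as the trace suggests) makes the test fail with status 2 and print that message; the script simply carries on. A minimal illustration (not part of the test output):

  var=""
  [ "$var" -eq 1 ] && echo enabled        # prints "[: : integer expression expected" and returns status 2
  [ "${var:-0}" -eq 1 ] && echo enabled   # one common guard: default the value before the numeric comparison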
00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:45.068 
04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:45.068 Cannot find device "nvmf_init_br" 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:45.068 Cannot find device "nvmf_init_br2" 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:45.068 Cannot find device "nvmf_tgt_br" 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:45.068 Cannot find device "nvmf_tgt_br2" 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:45.068 Cannot find device "nvmf_init_br" 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:45.068 Cannot find device "nvmf_init_br2" 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:10:45.068 04:00:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:45.068 Cannot find device "nvmf_tgt_br" 00:10:45.068 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:10:45.068 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:45.327 Cannot find device "nvmf_tgt_br2" 00:10:45.327 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:10:45.327 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:45.327 Cannot find device "nvmf_br" 00:10:45.327 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:10:45.327 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:45.327 Cannot find device "nvmf_init_if" 00:10:45.327 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:10:45.327 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:45.327 Cannot find device "nvmf_init_if2" 00:10:45.327 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:10:45.327 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:45.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:45.327 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:10:45.328 
04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:45.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:45.328 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:45.328 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:10:45.328 00:10:45.328 --- 10.0.0.3 ping statistics --- 00:10:45.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.328 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:45.328 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:45.586 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:45.586 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:10:45.586 00:10:45.586 --- 10.0.0.4 ping statistics --- 00:10:45.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.586 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:10:45.586 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:45.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:45.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:10:45.587 00:10:45.587 --- 10.0.0.1 ping statistics --- 00:10:45.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.587 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:45.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:45.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:10:45.587 00:10:45.587 --- 10.0.0.2 ping statistics --- 00:10:45.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.587 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64484 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64484 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64484 ']' 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.587 04:00:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:45.587 [2024-12-09 04:00:27.381919] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:10:45.587 [2024-12-09 04:00:27.382016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.846 [2024-12-09 04:00:27.537650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.846 [2024-12-09 04:00:27.617884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.846 [2024-12-09 04:00:27.618099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.846 [2024-12-09 04:00:27.618316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.846 [2024-12-09 04:00:27.618470] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.846 [2024-12-09 04:00:27.618512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.846 [2024-12-09 04:00:27.620151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.846 [2024-12-09 04:00:27.620320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.846 [2024-12-09 04:00:27.621219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.846 [2024-12-09 04:00:27.621243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.781 [2024-12-09 04:00:28.608326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.781 [2024-12-09 04:00:28.626570] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.781 Malloc0 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.781 [2024-12-09 04:00:28.693420] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64529 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64531 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:46.781 04:00:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64533 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:46.781 { 00:10:46.781 "params": { 00:10:46.781 "name": "Nvme$subsystem", 00:10:46.781 "trtype": "$TEST_TRANSPORT", 00:10:46.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.781 "adrfam": "ipv4", 00:10:46.781 "trsvcid": "$NVMF_PORT", 00:10:46.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:46.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.781 "hdgst": ${hdgst:-false}, 00:10:46.781 "ddgst": ${ddgst:-false} 00:10:46.781 }, 00:10:46.781 "method": "bdev_nvme_attach_controller" 00:10:46.781 } 00:10:46.781 EOF 00:10:46.781 )") 00:10:46.781 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:46.782 { 00:10:46.782 "params": { 00:10:46.782 "name": "Nvme$subsystem", 00:10:46.782 "trtype": "$TEST_TRANSPORT", 00:10:46.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.782 "adrfam": "ipv4", 00:10:46.782 "trsvcid": "$NVMF_PORT", 00:10:46.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:46.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.782 "hdgst": ${hdgst:-false}, 00:10:46.782 "ddgst": ${ddgst:-false} 00:10:46.782 }, 00:10:46.782 "method": "bdev_nvme_attach_controller" 00:10:46.782 } 00:10:46.782 EOF 00:10:46.782 )") 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
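On the target side, the bdev_io_wait run is served by a single malloc-backed namespace assembled with the RPCs just traced. Gathered into one place (arguments exactly as they appear in the trace; rpc.py stands for the repo's scripts/rpc.py; a sketch, not test output):

  rpc.py bdev_set_options -p 5 -c 1        # tiny bdev_io pool/cache, presumably to force the io_wait retry path this test exercises
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The four bdevperf processes launched next (WRITE_PID, READ_PID, FLUSH_PID, UNMAP_PID) then attach to that listener from the initiator side.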
00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:46.782 { 00:10:46.782 "params": { 00:10:46.782 "name": "Nvme$subsystem", 00:10:46.782 "trtype": "$TEST_TRANSPORT", 00:10:46.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.782 "adrfam": "ipv4", 00:10:46.782 "trsvcid": "$NVMF_PORT", 00:10:46.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:46.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.782 "hdgst": ${hdgst:-false}, 00:10:46.782 "ddgst": ${ddgst:-false} 00:10:46.782 }, 00:10:46.782 "method": "bdev_nvme_attach_controller" 00:10:46.782 } 00:10:46.782 EOF 00:10:46.782 )") 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64541 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:46.782 "params": { 00:10:46.782 "name": "Nvme1", 00:10:46.782 "trtype": "tcp", 00:10:46.782 "traddr": "10.0.0.3", 00:10:46.782 "adrfam": "ipv4", 00:10:46.782 "trsvcid": "4420", 00:10:46.782 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.782 "hdgst": false, 00:10:46.782 "ddgst": false 00:10:46.782 }, 00:10:46.782 "method": "bdev_nvme_attach_controller" 00:10:46.782 }' 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:46.782 { 00:10:46.782 "params": { 00:10:46.782 "name": "Nvme$subsystem", 00:10:46.782 "trtype": "$TEST_TRANSPORT", 00:10:46.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.782 "adrfam": "ipv4", 00:10:46.782 "trsvcid": "$NVMF_PORT", 00:10:46.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:46.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.782 "hdgst": ${hdgst:-false}, 00:10:46.782 "ddgst": ${ddgst:-false} 00:10:46.782 }, 00:10:46.782 "method": "bdev_nvme_attach_controller" 00:10:46.782 } 00:10:46.782 EOF 00:10:46.782 )") 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:46.782 "params": { 00:10:46.782 "name": "Nvme1", 00:10:46.782 "trtype": "tcp", 00:10:46.782 "traddr": "10.0.0.3", 00:10:46.782 "adrfam": "ipv4", 00:10:46.782 "trsvcid": "4420", 00:10:46.782 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.782 "hostnqn": 
"nqn.2016-06.io.spdk:host1", 00:10:46.782 "hdgst": false, 00:10:46.782 "ddgst": false 00:10:46.782 }, 00:10:46.782 "method": "bdev_nvme_attach_controller" 00:10:46.782 }' 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:46.782 "params": { 00:10:46.782 "name": "Nvme1", 00:10:46.782 "trtype": "tcp", 00:10:46.782 "traddr": "10.0.0.3", 00:10:46.782 "adrfam": "ipv4", 00:10:46.782 "trsvcid": "4420", 00:10:46.782 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.782 "hdgst": false, 00:10:46.782 "ddgst": false 00:10:46.782 }, 00:10:46.782 "method": "bdev_nvme_attach_controller" 00:10:46.782 }' 00:10:46.782 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:47.041 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:47.041 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:47.041 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:47.041 "params": { 00:10:47.041 "name": "Nvme1", 00:10:47.041 "trtype": "tcp", 00:10:47.041 "traddr": "10.0.0.3", 00:10:47.041 "adrfam": "ipv4", 00:10:47.041 "trsvcid": "4420", 00:10:47.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:47.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:47.041 "hdgst": false, 00:10:47.041 "ddgst": false 00:10:47.041 }, 00:10:47.041 "method": "bdev_nvme_attach_controller" 00:10:47.041 }' 00:10:47.041 [2024-12-09 04:00:28.767374] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:10:47.041 [2024-12-09 04:00:28.767472] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:47.041 04:00:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64529 00:10:47.041 [2024-12-09 04:00:28.792765] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:10:47.041 [2024-12-09 04:00:28.792878] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:47.041 [2024-12-09 04:00:28.795289] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:10:47.041 [2024-12-09 04:00:28.795365] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:47.041 [2024-12-09 04:00:28.797069] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:10:47.041 [2024-12-09 04:00:28.797302] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:47.299 [2024-12-09 04:00:29.005349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.299 [2024-12-09 04:00:29.068318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:47.299 [2024-12-09 04:00:29.081324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.299 [2024-12-09 04:00:29.114199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.299 [2024-12-09 04:00:29.185063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:47.299 [2024-12-09 04:00:29.199247] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.299 [2024-12-09 04:00:29.214888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.558 [2024-12-09 04:00:29.283910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:47.558 [2024-12-09 04:00:29.297797] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.558 [2024-12-09 04:00:29.318877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.558 Running I/O for 1 seconds... 00:10:47.558 Running I/O for 1 seconds... 00:10:47.558 [2024-12-09 04:00:29.388084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:47.558 [2024-12-09 04:00:29.401966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.558 Running I/O for 1 seconds... 00:10:47.815 Running I/O for 1 seconds... 
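For orientation before the result tables that follow: the trace above launches four bdevperf instances against the same NVMe-oF subsystem, each fed the generated attach-controller JSON through process substitution (that is the /dev/fd/63 in the command line). Only the unmap invocation appears verbatim in the trace; the other three lines below are reconstructed from the core masks and workloads reported in the per-job tables, so read this as an illustrative sketch rather than a copy of bdev_io_wait.sh:

    # bdevperf = /home/vagrant/spdk_repo/spdk/build/examples/bdevperf (path from the trace)
    # 1-second runs at queue depth 128, 4 KiB IOs, one workload pinned to each core
    bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
    bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
    bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
    wait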
00:10:48.750 166256.00 IOPS, 649.44 MiB/s 00:10:48.750 Latency(us) 00:10:48.750 [2024-12-09T04:00:30.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.750 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:48.750 Nvme1n1 : 1.00 165923.36 648.14 0.00 0.00 767.42 368.64 1966.08 00:10:48.750 [2024-12-09T04:00:30.700Z] =================================================================================================================== 00:10:48.750 [2024-12-09T04:00:30.700Z] Total : 165923.36 648.14 0.00 0.00 767.42 368.64 1966.08 00:10:48.750 9869.00 IOPS, 38.55 MiB/s 00:10:48.750 Latency(us) 00:10:48.750 [2024-12-09T04:00:30.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.750 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:48.750 Nvme1n1 : 1.01 9907.00 38.70 0.00 0.00 12857.42 7179.17 17635.14 00:10:48.750 [2024-12-09T04:00:30.700Z] =================================================================================================================== 00:10:48.750 [2024-12-09T04:00:30.700Z] Total : 9907.00 38.70 0.00 0.00 12857.42 7179.17 17635.14 00:10:48.750 8871.00 IOPS, 34.65 MiB/s 00:10:48.750 Latency(us) 00:10:48.750 [2024-12-09T04:00:30.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.750 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:48.750 Nvme1n1 : 1.01 8932.05 34.89 0.00 0.00 14263.61 7387.69 23712.12 00:10:48.750 [2024-12-09T04:00:30.700Z] =================================================================================================================== 00:10:48.750 [2024-12-09T04:00:30.700Z] Total : 8932.05 34.89 0.00 0.00 14263.61 7387.69 23712.12 00:10:48.750 8331.00 IOPS, 32.54 MiB/s 00:10:48.750 Latency(us) 00:10:48.750 [2024-12-09T04:00:30.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.750 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:48.750 Nvme1n1 : 1.01 8398.64 32.81 0.00 0.00 15169.51 2323.55 22997.18 00:10:48.750 [2024-12-09T04:00:30.700Z] =================================================================================================================== 00:10:48.750 [2024-12-09T04:00:30.700Z] Total : 8398.64 32.81 0.00 0.00 15169.51 2323.55 22997.18 00:10:49.008 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64531 00:10:49.008 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64533 00:10:49.008 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64541 00:10:49.008 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:49.008 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.008 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:49.008 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.008 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:49.009 rmmod nvme_tcp 00:10:49.009 rmmod nvme_fabrics 00:10:49.009 rmmod nvme_keyring 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64484 ']' 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64484 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64484 ']' 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64484 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64484 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64484' 00:10:49.009 killing process with pid 64484 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64484 00:10:49.009 04:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64484 00:10:49.267 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:49.267 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:49.267 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:49.267 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:49.267 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:49.267 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:49.267 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:49.267 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:49.267 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:49.267 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:49.267 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:49.267 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:49.267 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:10:49.527 00:10:49.527 real 0m4.726s 00:10:49.527 user 0m18.852s 00:10:49.527 sys 0m2.574s 00:10:49.527 ************************************ 00:10:49.527 END TEST nvmf_bdev_io_wait 00:10:49.527 ************************************ 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:49.527 ************************************ 00:10:49.527 START TEST nvmf_queue_depth 00:10:49.527 ************************************ 00:10:49.527 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:49.786 * Looking for test storage... 
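Stripped of the xtrace prefixes, the nvmftestfini teardown recorded just above (it closes nvmf_bdev_io_wait here and reappears at the end of each test in this log) reduces to roughly the commands below. Interface and module names are taken from the trace; the final namespace deletion is an assumption about what _remove_spdk_ns does, since its output is redirected away in the log:

    kill "$nvmfpid" && wait "$nvmfpid"                     # stop the nvmf_tgt reactor (pid 64484 here)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster && ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                       # assumed equivalent of _remove_spdk_ns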
00:10:49.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:49.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.786 --rc genhtml_branch_coverage=1 00:10:49.786 --rc genhtml_function_coverage=1 00:10:49.786 --rc genhtml_legend=1 00:10:49.786 --rc geninfo_all_blocks=1 00:10:49.786 --rc geninfo_unexecuted_blocks=1 00:10:49.786 00:10:49.786 ' 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:49.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.786 --rc genhtml_branch_coverage=1 00:10:49.786 --rc genhtml_function_coverage=1 00:10:49.786 --rc genhtml_legend=1 00:10:49.786 --rc geninfo_all_blocks=1 00:10:49.786 --rc geninfo_unexecuted_blocks=1 00:10:49.786 00:10:49.786 ' 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:49.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.786 --rc genhtml_branch_coverage=1 00:10:49.786 --rc genhtml_function_coverage=1 00:10:49.786 --rc genhtml_legend=1 00:10:49.786 --rc geninfo_all_blocks=1 00:10:49.786 --rc geninfo_unexecuted_blocks=1 00:10:49.786 00:10:49.786 ' 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:49.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.786 --rc genhtml_branch_coverage=1 00:10:49.786 --rc genhtml_function_coverage=1 00:10:49.786 --rc genhtml_legend=1 00:10:49.786 --rc geninfo_all_blocks=1 00:10:49.786 --rc geninfo_unexecuted_blocks=1 00:10:49.786 00:10:49.786 ' 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:49.786 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:49.786 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:49.787 
04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:49.787 04:00:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:49.787 Cannot find device "nvmf_init_br" 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:49.787 Cannot find device "nvmf_init_br2" 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:49.787 Cannot find device "nvmf_tgt_br" 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:10:49.787 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:50.044 Cannot find device "nvmf_tgt_br2" 00:10:50.044 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:10:50.044 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:50.044 Cannot find device "nvmf_init_br" 00:10:50.044 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:10:50.044 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:50.044 Cannot find device "nvmf_init_br2" 00:10:50.044 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:10:50.044 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:50.044 Cannot find device "nvmf_tgt_br" 00:10:50.044 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:10:50.044 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:50.044 Cannot find device "nvmf_tgt_br2" 00:10:50.044 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:10:50.044 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:50.044 Cannot find device "nvmf_br" 00:10:50.044 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:10:50.044 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:50.044 Cannot find device "nvmf_init_if" 00:10:50.044 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:10:50.044 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:50.044 Cannot find device "nvmf_init_if2" 00:10:50.044 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:10:50.044 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:50.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:50.044 04:00:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:10:50.044 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:50.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:50.045 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:10:50.045 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:50.045 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:50.045 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:50.045 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:50.045 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:50.045 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:50.045 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:50.045 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:50.045 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:50.045 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:50.045 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:50.045 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:50.045 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:50.045 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:50.045 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:50.045 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:50.302 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:50.302 04:00:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:50.302 
04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:50.302 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:50.302 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:10:50.302 00:10:50.302 --- 10.0.0.3 ping statistics --- 00:10:50.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.302 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:50.302 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:50.302 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:10:50.302 00:10:50.302 --- 10.0.0.4 ping statistics --- 00:10:50.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.302 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:50.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:50.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:50.302 00:10:50.302 --- 10.0.0.1 ping statistics --- 00:10:50.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.302 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:50.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:50.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:10:50.302 00:10:50.302 --- 10.0.0.2 ping statistics --- 00:10:50.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.302 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64820 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64820 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64820 ']' 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.302 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.302 [2024-12-09 04:00:32.201280] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
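To summarise the nvmf_veth_init steps above: the initiator interfaces stay in the root namespace (10.0.0.1 and 10.0.0.2) while the target's interfaces live in the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), and the veth peer ends are tied together by the nvmf_br bridge, with iptables rules admitting TCP port 4420. A condensed sketch of the first initiator/target pair, names taken from the trace (the second pair is built the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator end + bridge end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target end + bridge end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if && ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                              # initiator -> target across the bridge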
00:10:50.302 [2024-12-09 04:00:32.201375] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.559 [2024-12-09 04:00:32.361987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.559 [2024-12-09 04:00:32.425457] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.559 [2024-12-09 04:00:32.425547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.559 [2024-12-09 04:00:32.425582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.559 [2024-12-09 04:00:32.425593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.559 [2024-12-09 04:00:32.425602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.559 [2024-12-09 04:00:32.426126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.817 [2024-12-09 04:00:32.506315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.817 [2024-12-09 04:00:32.645828] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.817 Malloc0 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:50.817 [2024-12-09 04:00:32.706301] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64845 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64845 /var/tmp/bdevperf.sock 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64845 ']' 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:50.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.817 04:00:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:51.075 [2024-12-09 04:00:32.773778] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
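Stripped of the shell tracing, the queue-depth setup above is a short RPC sequence against the target followed by a bdevperf process that keeps 1024 commands in flight for 10 seconds; the controller attach and the perform_tests call appear just below in the log. A sketch using scripts/rpc.py and examples/bdev/bdevperf/bdevperf.py, with every address, NQN and flag taken from the trace:

    # target side (nvmf_tgt started inside the namespace with -m 0x2, default RPC socket)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # initiator side: -z makes bdevperf wait until a bdev is attached over its own RPC socket
    bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests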
00:10:51.076 [2024-12-09 04:00:32.773886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64845 ] 00:10:51.076 [2024-12-09 04:00:32.929293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.076 [2024-12-09 04:00:33.005496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.333 [2024-12-09 04:00:33.088716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:51.333 04:00:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.333 04:00:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:51.333 04:00:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:51.333 04:00:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.333 04:00:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:51.333 NVMe0n1 00:10:51.333 04:00:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.333 04:00:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:51.591 Running I/O for 10 seconds... 00:10:53.456 7168.00 IOPS, 28.00 MiB/s [2024-12-09T04:00:36.780Z] 7670.00 IOPS, 29.96 MiB/s [2024-12-09T04:00:37.714Z] 7562.67 IOPS, 29.54 MiB/s [2024-12-09T04:00:38.648Z] 7841.25 IOPS, 30.63 MiB/s [2024-12-09T04:00:39.578Z] 7947.60 IOPS, 31.05 MiB/s [2024-12-09T04:00:40.511Z] 8038.83 IOPS, 31.40 MiB/s [2024-12-09T04:00:41.444Z] 8108.43 IOPS, 31.67 MiB/s [2024-12-09T04:00:42.820Z] 8264.25 IOPS, 32.28 MiB/s [2024-12-09T04:00:43.757Z] 8429.56 IOPS, 32.93 MiB/s [2024-12-09T04:00:43.757Z] 8529.90 IOPS, 33.32 MiB/s 00:11:01.807 Latency(us) 00:11:01.807 [2024-12-09T04:00:43.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.807 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:01.807 Verification LBA range: start 0x0 length 0x4000 00:11:01.807 NVMe0n1 : 10.06 8573.12 33.49 0.00 0.00 118953.93 11319.85 105810.85 00:11:01.807 [2024-12-09T04:00:43.757Z] =================================================================================================================== 00:11:01.807 [2024-12-09T04:00:43.757Z] Total : 8573.12 33.49 0.00 0.00 118953.93 11319.85 105810.85 00:11:01.807 { 00:11:01.807 "results": [ 00:11:01.807 { 00:11:01.807 "job": "NVMe0n1", 00:11:01.807 "core_mask": "0x1", 00:11:01.807 "workload": "verify", 00:11:01.807 "status": "finished", 00:11:01.807 "verify_range": { 00:11:01.807 "start": 0, 00:11:01.807 "length": 16384 00:11:01.807 }, 00:11:01.807 "queue_depth": 1024, 00:11:01.807 "io_size": 4096, 00:11:01.807 "runtime": 10.063434, 00:11:01.807 "iops": 8573.117287796591, 00:11:01.807 "mibps": 33.48873940545543, 00:11:01.807 "io_failed": 0, 00:11:01.807 "io_timeout": 0, 00:11:01.807 "avg_latency_us": 118953.92971540266, 00:11:01.807 "min_latency_us": 11319.854545454546, 00:11:01.807 "max_latency_us": 105810.8509090909 
00:11:01.807 } 00:11:01.807 ], 00:11:01.807 "core_count": 1 00:11:01.807 } 00:11:01.807 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64845 00:11:01.807 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64845 ']' 00:11:01.807 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64845 00:11:01.807 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:01.807 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.807 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64845 00:11:01.807 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.807 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.807 killing process with pid 64845 00:11:01.807 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64845' 00:11:01.807 Received shutdown signal, test time was about 10.000000 seconds 00:11:01.807 00:11:01.807 Latency(us) 00:11:01.807 [2024-12-09T04:00:43.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.807 [2024-12-09T04:00:43.757Z] =================================================================================================================== 00:11:01.807 [2024-12-09T04:00:43.757Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:01.807 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64845 00:11:01.807 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64845 00:11:02.065 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:02.065 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:02.065 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:02.065 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:02.065 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:02.065 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:02.065 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:02.066 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:02.066 rmmod nvme_tcp 00:11:02.066 rmmod nvme_fabrics 00:11:02.066 rmmod nvme_keyring 00:11:02.066 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:02.066 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:02.066 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:02.066 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64820 ']' 00:11:02.066 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64820 00:11:02.066 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64820 ']' 
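As a quick sanity check on the JSON summary above, the reported throughput and latency are consistent with the requested queue depth via Little's law (outstanding IOs ~ IOPS x average latency):

    8573.12 IO/s x 118953.93 us = 8573.12 x 0.118954 s ~ 1020 IOs in flight, close to the -q 1024 passed to bdevperf

The small shortfall is plausibly ramp-up and drain time inside the 10-second run.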
00:11:02.066 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64820 00:11:02.066 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:02.066 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.066 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64820 00:11:02.066 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:02.066 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:02.066 killing process with pid 64820 00:11:02.066 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64820' 00:11:02.066 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64820 00:11:02.066 04:00:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64820 00:11:02.324 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:02.324 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:02.324 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:02.324 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:02.324 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:02.324 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:02.324 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:02.324 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:02.324 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:02.324 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:02.583 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:02.583 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:02.583 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:02.583 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:02.583 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:02.583 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:02.583 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:02.583 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:02.583 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:02.583 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:02.583 04:00:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:02.583 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:02.583 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:02.583 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.583 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.583 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.841 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:11:02.841 00:11:02.841 real 0m13.079s 00:11:02.841 user 0m21.656s 00:11:02.841 sys 0m2.525s 00:11:02.841 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.841 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:02.841 ************************************ 00:11:02.841 END TEST nvmf_queue_depth 00:11:02.841 ************************************ 00:11:02.841 04:00:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:02.841 04:00:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.841 04:00:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.841 04:00:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:02.841 ************************************ 00:11:02.841 START TEST nvmf_target_multipath 00:11:02.841 ************************************ 00:11:02.841 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:02.841 * Looking for test storage... 
00:11:02.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:02.841 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:02.841 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:11:02.841 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:03.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.101 --rc genhtml_branch_coverage=1 00:11:03.101 --rc genhtml_function_coverage=1 00:11:03.101 --rc genhtml_legend=1 00:11:03.101 --rc geninfo_all_blocks=1 00:11:03.101 --rc geninfo_unexecuted_blocks=1 00:11:03.101 00:11:03.101 ' 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:03.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.101 --rc genhtml_branch_coverage=1 00:11:03.101 --rc genhtml_function_coverage=1 00:11:03.101 --rc genhtml_legend=1 00:11:03.101 --rc geninfo_all_blocks=1 00:11:03.101 --rc geninfo_unexecuted_blocks=1 00:11:03.101 00:11:03.101 ' 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:03.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.101 --rc genhtml_branch_coverage=1 00:11:03.101 --rc genhtml_function_coverage=1 00:11:03.101 --rc genhtml_legend=1 00:11:03.101 --rc geninfo_all_blocks=1 00:11:03.101 --rc geninfo_unexecuted_blocks=1 00:11:03.101 00:11:03.101 ' 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:03.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.101 --rc genhtml_branch_coverage=1 00:11:03.101 --rc genhtml_function_coverage=1 00:11:03.101 --rc genhtml_legend=1 00:11:03.101 --rc geninfo_all_blocks=1 00:11:03.101 --rc geninfo_unexecuted_blocks=1 00:11:03.101 00:11:03.101 ' 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.101 
04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:03.101 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.101 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:03.102 04:00:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:03.102 Cannot find device "nvmf_init_br" 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:03.102 Cannot find device "nvmf_init_br2" 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:03.102 Cannot find device "nvmf_tgt_br" 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:03.102 Cannot find device "nvmf_tgt_br2" 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:03.102 Cannot find device "nvmf_init_br" 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:03.102 Cannot find device "nvmf_init_br2" 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:03.102 Cannot find device "nvmf_tgt_br" 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:03.102 Cannot find device "nvmf_tgt_br2" 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:03.102 Cannot find device "nvmf_br" 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:03.102 Cannot find device "nvmf_init_if" 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:03.102 Cannot find device "nvmf_init_if2" 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:03.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:03.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:11:03.102 04:00:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:03.102 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:03.102 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:03.102 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:03.102 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:03.360 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:03.360 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:11:03.360 00:11:03.360 --- 10.0.0.3 ping statistics --- 00:11:03.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.360 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:03.360 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:03.360 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:11:03.360 00:11:03.360 --- 10.0.0.4 ping statistics --- 00:11:03.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.360 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:03.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:03.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:11:03.360 00:11:03.360 --- 10.0.0.1 ping statistics --- 00:11:03.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.360 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:03.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:11:03.360 00:11:03.360 --- 10.0.0.2 ping statistics --- 00:11:03.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.360 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:03.360 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:03.361 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:11:03.361 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:03.361 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:03.361 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:03.361 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:03.361 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:03.361 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=65216 00:11:03.361 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:03.361 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 65216 00:11:03.361 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 65216 ']' 00:11:03.361 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.361 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:03.361 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.361 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.361 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:03.619 [2024-12-09 04:00:45.369630] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:11:03.619 [2024-12-09 04:00:45.369740] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.619 [2024-12-09 04:00:45.519829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.878 [2024-12-09 04:00:45.580528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.878 [2024-12-09 04:00:45.580599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.878 [2024-12-09 04:00:45.580611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.878 [2024-12-09 04:00:45.580620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.878 [2024-12-09 04:00:45.580628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.878 [2024-12-09 04:00:45.582112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.878 [2024-12-09 04:00:45.582258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.878 [2024-12-09 04:00:45.582328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.878 [2024-12-09 04:00:45.582343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.878 [2024-12-09 04:00:45.659340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:03.878 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.878 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:11:03.878 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:03.878 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:03.878 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:03.878 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.878 04:00:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:04.445 [2024-12-09 04:00:46.129767] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.445 04:00:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:04.702 Malloc0 00:11:04.702 04:00:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:04.960 04:00:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:05.218 04:00:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:05.476 [2024-12-09 04:00:47.396986] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:05.476 04:00:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:11:05.734 [2024-12-09 04:00:47.657359] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:11:05.734 04:00:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid=9ed3da8d-b493-400f-8e42-fb307dd7edcc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:05.993 04:00:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid=9ed3da8d-b493-400f-8e42-fb307dd7edcc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:11:06.251 04:00:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:06.251 04:00:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:11:06.251 04:00:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.251 04:00:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:06.252 04:00:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:11:08.154 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:08.154 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65304 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:08.155 04:00:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:08.155 [global] 00:11:08.155 thread=1 00:11:08.155 invalidate=1 00:11:08.155 rw=randrw 00:11:08.155 time_based=1 00:11:08.155 runtime=6 00:11:08.155 ioengine=libaio 00:11:08.155 direct=1 00:11:08.155 bs=4096 00:11:08.155 iodepth=128 00:11:08.155 norandommap=0 00:11:08.155 numjobs=1 00:11:08.155 00:11:08.155 verify_dump=1 00:11:08.155 verify_backlog=512 00:11:08.155 verify_state_save=0 00:11:08.155 do_verify=1 00:11:08.155 verify=crc32c-intel 00:11:08.155 [job0] 00:11:08.155 filename=/dev/nvme0n1 00:11:08.155 Could not set queue depth (nvme0n1) 00:11:08.413 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.413 fio-3.35 00:11:08.413 Starting 1 thread 00:11:09.348 04:00:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:09.605 04:00:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:09.863 04:00:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:09.863 04:00:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:09.863 04:00:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:09.863 04:00:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:09.863 04:00:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:09.863 04:00:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:09.863 04:00:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:09.863 04:00:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:09.863 04:00:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:09.863 04:00:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:09.863 04:00:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:09.863 04:00:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:09.863 04:00:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:10.121 04:00:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:10.380 04:00:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:10.380 04:00:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:10.380 04:00:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:10.380 04:00:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:10.380 04:00:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:10.380 04:00:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:10.380 04:00:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:10.380 04:00:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:10.380 04:00:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:10.380 04:00:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:10.380 04:00:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:10.380 04:00:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:10.380 04:00:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65304 00:11:14.596 00:11:14.596 job0: (groupid=0, jobs=1): err= 0: pid=65325: Mon Dec 9 04:00:56 2024 00:11:14.596 read: IOPS=10.5k, BW=41.2MiB/s (43.2MB/s)(247MiB/6003msec) 00:11:14.596 slat (usec): min=8, max=7906, avg=54.99, stdev=213.33 00:11:14.596 clat (usec): min=1351, max=18348, avg=8224.19, stdev=1389.43 00:11:14.596 lat (usec): min=1426, max=18357, avg=8279.19, stdev=1392.50 00:11:14.597 clat percentiles (usec): 00:11:14.597 | 1.00th=[ 4359], 5.00th=[ 6390], 10.00th=[ 7046], 20.00th=[ 7504], 00:11:14.597 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8291], 00:11:14.597 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[11338], 00:11:14.597 | 99.00th=[12780], 99.50th=[13042], 99.90th=[13566], 99.95th=[13960], 00:11:14.597 | 99.99th=[18220] 00:11:14.597 bw ( KiB/s): min= 3000, max=30104, per=52.27%, avg=22029.18, stdev=7955.43, samples=11 00:11:14.597 iops : min= 750, max= 7526, avg=5507.27, stdev=1988.85, samples=11 00:11:14.597 write: IOPS=6416, BW=25.1MiB/s (26.3MB/s)(131MiB/5226msec); 0 zone resets 00:11:14.597 slat (usec): min=16, max=3387, avg=64.90, stdev=155.11 00:11:14.597 clat (usec): min=1119, max=17109, avg=7195.01, stdev=1232.85 00:11:14.597 lat (usec): min=1151, max=18047, avg=7259.91, stdev=1237.02 00:11:14.597 clat percentiles (usec): 00:11:14.597 | 1.00th=[ 3294], 5.00th=[ 4424], 10.00th=[ 5997], 20.00th=[ 6652], 00:11:14.597 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7504], 00:11:14.597 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8586], 00:11:14.597 | 99.00th=[10814], 99.50th=[11469], 99.90th=[12911], 99.95th=[13435], 00:11:14.597 | 99.99th=[16712] 00:11:14.597 bw ( KiB/s): min= 2960, max=30344, per=86.05%, avg=22085.91, stdev=7833.72, samples=11 00:11:14.597 iops : min= 740, max= 7586, avg=5521.45, stdev=1958.43, samples=11 00:11:14.597 lat (msec) : 2=0.03%, 4=1.56%, 10=92.95%, 20=5.46% 00:11:14.597 cpu : usr=6.16%, sys=22.26%, ctx=5749, majf=0, minf=127 00:11:14.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:14.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.597 issued rwts: total=63251,33532,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.597 00:11:14.597 Run status group 0 (all jobs): 00:11:14.597 READ: bw=41.2MiB/s (43.2MB/s), 41.2MiB/s-41.2MiB/s (43.2MB/s-43.2MB/s), io=247MiB (259MB), run=6003-6003msec 00:11:14.597 WRITE: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=131MiB (137MB), run=5226-5226msec 00:11:14.597 00:11:14.597 Disk stats (read/write): 00:11:14.597 nvme0n1: ios=62211/33020, merge=0/0, ticks=492401/223198, in_queue=715599, util=98.62% 00:11:14.597 04:00:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:14.855 04:00:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:11:15.114 04:00:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:15.114 04:00:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:15.114 04:00:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:15.114 04:00:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:15.114 04:00:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:15.114 04:00:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:15.114 04:00:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:15.114 04:00:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:15.114 04:00:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:15.114 04:00:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:15.114 04:00:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:15.114 04:00:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:15.115 04:00:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:15.115 04:00:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:15.115 04:00:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65405 00:11:15.115 04:00:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:15.115 [global] 00:11:15.115 thread=1 00:11:15.115 invalidate=1 00:11:15.115 rw=randrw 00:11:15.115 time_based=1 00:11:15.115 runtime=6 00:11:15.115 ioengine=libaio 00:11:15.115 direct=1 00:11:15.115 bs=4096 00:11:15.115 iodepth=128 00:11:15.115 norandommap=0 00:11:15.115 numjobs=1 00:11:15.115 00:11:15.115 verify_dump=1 00:11:15.115 verify_backlog=512 00:11:15.115 verify_state_save=0 00:11:15.115 do_verify=1 00:11:15.115 verify=crc32c-intel 00:11:15.115 [job0] 00:11:15.115 filename=/dev/nvme0n1 00:11:15.115 Could not set queue depth (nvme0n1) 00:11:15.115 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:15.115 fio-3.35 00:11:15.115 Starting 1 thread 00:11:16.051 04:00:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:16.311 04:00:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:16.570 
04:00:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:16.570 04:00:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:16.570 04:00:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:16.570 04:00:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:16.570 04:00:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:16.570 04:00:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:16.570 04:00:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:16.570 04:00:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:16.570 04:00:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:16.570 04:00:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:16.570 04:00:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:16.570 04:00:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:16.570 04:00:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:17.137 04:00:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:17.395 04:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:17.396 04:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:17.396 04:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:17.396 04:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:17.396 04:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:17.396 04:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:17.396 04:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:17.396 04:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:17.396 04:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:17.396 04:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:17.396 04:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:17.396 04:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:17.396 04:00:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65405 00:11:21.581 00:11:21.581 job0: (groupid=0, jobs=1): err= 0: pid=65426: Mon Dec 9 04:01:03 2024 00:11:21.581 read: IOPS=11.4k, BW=44.5MiB/s (46.6MB/s)(267MiB/6006msec) 00:11:21.581 slat (usec): min=4, max=6444, avg=43.58, stdev=189.21 00:11:21.581 clat (usec): min=717, max=15252, avg=7679.12, stdev=1929.92 00:11:21.581 lat (usec): min=728, max=15270, avg=7722.70, stdev=1945.11 00:11:21.581 clat percentiles (usec): 00:11:21.581 | 1.00th=[ 2900], 5.00th=[ 4359], 10.00th=[ 4948], 20.00th=[ 5997], 00:11:21.581 | 30.00th=[ 7046], 40.00th=[ 7701], 50.00th=[ 8029], 60.00th=[ 8291], 00:11:21.581 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[10552], 00:11:21.581 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14222], 99.95th=[14353], 00:11:21.581 | 99.99th=[14877] 00:11:21.581 bw ( KiB/s): min=16048, max=36814, per=54.38%, avg=24754.64, stdev=7000.34, samples=11 00:11:21.581 iops : min= 4012, max= 9203, avg=6188.55, stdev=1749.92, samples=11 00:11:21.581 write: IOPS=6843, BW=26.7MiB/s (28.0MB/s)(144MiB/5382msec); 0 zone resets 00:11:21.581 slat (usec): min=14, max=2131, avg=54.35, stdev=126.06 00:11:21.581 clat (usec): min=438, max=15070, avg=6454.33, stdev=1823.02 00:11:21.581 lat (usec): min=464, max=15110, avg=6508.68, stdev=1835.35 00:11:21.581 clat percentiles (usec): 00:11:21.581 | 1.00th=[ 2573], 5.00th=[ 3392], 10.00th=[ 3851], 20.00th=[ 4490], 00:11:21.581 | 30.00th=[ 5211], 40.00th=[ 6456], 50.00th=[ 7046], 60.00th=[ 7373], 00:11:21.581 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8291], 95.00th=[ 8586], 00:11:21.581 | 99.00th=[10814], 99.50th=[11600], 99.90th=[13304], 99.95th=[13960], 00:11:21.581 | 99.99th=[15008] 00:11:21.581 bw ( KiB/s): min=16368, max=37700, per=90.37%, avg=24738.00, stdev=6818.62, samples=11 00:11:21.581 iops : min= 4092, max= 9425, avg=6184.45, stdev=1704.67, samples=11 00:11:21.581 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:21.581 lat (msec) : 2=0.25%, 4=6.24%, 10=89.17%, 20=4.32% 00:11:21.581 cpu : usr=6.16%, sys=25.37%, ctx=6064, majf=0, minf=90 00:11:21.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:21.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.581 issued rwts: total=68346,36830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.581 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:11:21.581 00:11:21.581 Run status group 0 (all jobs): 00:11:21.581 READ: bw=44.5MiB/s (46.6MB/s), 44.5MiB/s-44.5MiB/s (46.6MB/s-46.6MB/s), io=267MiB (280MB), run=6006-6006msec 00:11:21.581 WRITE: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=144MiB (151MB), run=5382-5382msec 00:11:21.581 00:11:21.581 Disk stats (read/write): 00:11:21.581 nvme0n1: ios=67512/36087, merge=0/0, ticks=491245/214232, in_queue=705477, util=98.61% 00:11:21.581 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:21.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:21.581 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:21.581 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:11:21.581 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:21.581 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.581 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:21.581 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.581 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:11:21.581 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.840 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:21.840 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:21.840 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:21.840 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:21.840 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:21.840 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:21.840 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:21.840 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:21.840 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:21.840 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:21.840 rmmod nvme_tcp 00:11:21.840 rmmod nvme_fabrics 00:11:21.840 rmmod nvme_keyring 00:11:22.098 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:22.098 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:22.098 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:22.098 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
65216 ']' 00:11:22.098 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 65216 00:11:22.098 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 65216 ']' 00:11:22.098 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 65216 00:11:22.098 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:11:22.098 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:22.098 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65216 00:11:22.098 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:22.098 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:22.098 killing process with pid 65216 00:11:22.098 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65216' 00:11:22.098 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 65216 00:11:22.098 04:01:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 65216 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:22.357 04:01:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:22.357 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:22.617 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:22.617 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:22.617 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:22.617 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.617 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.617 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.617 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:11:22.617 00:11:22.617 real 0m19.811s 00:11:22.617 user 1m13.181s 00:11:22.617 sys 0m10.209s 00:11:22.617 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.617 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:22.617 ************************************ 00:11:22.617 END TEST nvmf_target_multipath 00:11:22.617 ************************************ 00:11:22.617 04:01:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:22.617 04:01:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:22.617 04:01:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.617 04:01:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:22.617 ************************************ 00:11:22.617 START TEST nvmf_zcopy 00:11:22.617 ************************************ 00:11:22.617 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:22.617 * Looking for test storage... 
00:11:22.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:22.617 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:22.617 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:11:22.617 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:22.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.876 --rc genhtml_branch_coverage=1 00:11:22.876 --rc genhtml_function_coverage=1 00:11:22.876 --rc genhtml_legend=1 00:11:22.876 --rc geninfo_all_blocks=1 00:11:22.876 --rc geninfo_unexecuted_blocks=1 00:11:22.876 00:11:22.876 ' 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:22.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.876 --rc genhtml_branch_coverage=1 00:11:22.876 --rc genhtml_function_coverage=1 00:11:22.876 --rc genhtml_legend=1 00:11:22.876 --rc geninfo_all_blocks=1 00:11:22.876 --rc geninfo_unexecuted_blocks=1 00:11:22.876 00:11:22.876 ' 00:11:22.876 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:22.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.876 --rc genhtml_branch_coverage=1 00:11:22.876 --rc genhtml_function_coverage=1 00:11:22.877 --rc genhtml_legend=1 00:11:22.877 --rc geninfo_all_blocks=1 00:11:22.877 --rc geninfo_unexecuted_blocks=1 00:11:22.877 00:11:22.877 ' 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:22.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.877 --rc genhtml_branch_coverage=1 00:11:22.877 --rc genhtml_function_coverage=1 00:11:22.877 --rc genhtml_legend=1 00:11:22.877 --rc geninfo_all_blocks=1 00:11:22.877 --rc geninfo_unexecuted_blocks=1 00:11:22.877 00:11:22.877 ' 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
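The scripts/common.sh trace above (lt -> cmp_versions) is deciding whether the installed lcov predates version 2 so the matching coverage options can be exported. The sketch below (not part of the captured log) reconstructs that comparison from the traced variable names only; treating non-numeric components as 0 and handling just the '<' operator exercised by the trace are assumptions.

#!/usr/bin/env bash
# Sketch of the per-component version compare stepped through in the xtrace:
# split both versions on ".-:", then compare the pieces numerically, left to right.
decimal() {
    [[ $1 =~ ^[0-9]+$ ]] && echo "$1" || echo 0
}

version_lt() {
    local -a ver1 ver2
    local v len a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        a=$(decimal "${ver1[v]:-0}")
        b=$(decimal "${ver2[v]:-0}")
        ((a < b)) && return 0
        ((a > b)) && return 1
    done
    return 1 # equal versions are not less-than
}

# Matches the traced outcome: lcov 1.15 sorts before 2, so the legacy
# --rc lcov_branch_coverage/lcov_function_coverage options get exported.
version_lt 1.15 2 && echo "old lcov"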
00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.877 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:22.877 Cannot find device "nvmf_init_br" 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:22.877 04:01:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:22.877 Cannot find device "nvmf_init_br2" 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:22.877 Cannot find device "nvmf_tgt_br" 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:22.877 Cannot find device "nvmf_tgt_br2" 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:22.877 Cannot find device "nvmf_init_br" 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:22.877 Cannot find device "nvmf_init_br2" 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:22.877 Cannot find device "nvmf_tgt_br" 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:11:22.877 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:22.877 Cannot find device "nvmf_tgt_br2" 00:11:22.878 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:11:22.878 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:22.878 Cannot find device "nvmf_br" 00:11:22.878 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:11:22.878 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:22.878 Cannot find device "nvmf_init_if" 00:11:22.878 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:11:22.878 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:22.878 Cannot find device "nvmf_init_if2" 00:11:22.878 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:11:22.878 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:23.136 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:23.136 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:23.136 04:01:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:23.136 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:23.136 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:23.136 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:23.136 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:23.136 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:23.136 04:01:05 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:23.136 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:23.136 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:23.136 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:23.136 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:23.136 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:11:23.136 00:11:23.136 --- 10.0.0.3 ping statistics --- 00:11:23.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.136 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:11:23.136 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:23.136 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:23.136 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:11:23.136 00:11:23.136 --- 10.0.0.4 ping statistics --- 00:11:23.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.137 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:11:23.137 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:23.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:23.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:23.137 00:11:23.137 --- 10.0.0.1 ping statistics --- 00:11:23.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.137 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:23.137 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:23.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:23.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:11:23.137 00:11:23.137 --- 10.0.0.2 ping statistics --- 00:11:23.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.137 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:23.137 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.137 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:11:23.137 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:23.137 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.137 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:23.137 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:23.137 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.137 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:23.137 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:23.137 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:23.137 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:23.137 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:23.137 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:23.395 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65732 00:11:23.395 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:23.395 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65732 00:11:23.395 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65732 ']' 00:11:23.395 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.395 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.395 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.395 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.395 04:01:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:23.395 [2024-12-09 04:01:05.150878] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:11:23.395 [2024-12-09 04:01:05.150991] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.395 [2024-12-09 04:01:05.306903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.685 [2024-12-09 04:01:05.395827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.685 [2024-12-09 04:01:05.395904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.685 [2024-12-09 04:01:05.395919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.685 [2024-12-09 04:01:05.395931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.685 [2024-12-09 04:01:05.395941] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:23.685 [2024-12-09 04:01:05.396488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.685 [2024-12-09 04:01:05.477843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:24.261 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.261 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:24.261 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.261 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:24.261 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:24.518 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.518 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:24.518 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:24.518 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.518 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:24.518 [2024-12-09 04:01:06.251806] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.518 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.518 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:24.518 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.518 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:24.518 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.518 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:24.518 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.518 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:11:24.518 [2024-12-09 04:01:06.267898] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:24.518 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.518 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:24.518 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.518 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:24.519 malloc0 00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:24.519 { 00:11:24.519 "params": { 00:11:24.519 "name": "Nvme$subsystem", 00:11:24.519 "trtype": "$TEST_TRANSPORT", 00:11:24.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.519 "adrfam": "ipv4", 00:11:24.519 "trsvcid": "$NVMF_PORT", 00:11:24.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.519 "hdgst": ${hdgst:-false}, 00:11:24.519 "ddgst": ${ddgst:-false} 00:11:24.519 }, 00:11:24.519 "method": "bdev_nvme_attach_controller" 00:11:24.519 } 00:11:24.519 EOF 00:11:24.519 )") 00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
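Before bdevperf starts, the zcopy test has already issued its whole target bring-up through rpc_cmd. Collapsed into one place, and assuming the default /var/tmp/spdk.sock RPC socket, the traced calls amount to the sequence below; the NQN, flags, addresses, and rpc.py path are taken verbatim from the log, nothing is added.

#!/usr/bin/env bash
# Condensed restatement of the rpc_cmd calls traced above (zcopy.sh @22-@30):
# TCP transport created with --zcopy, one subsystem capped at 10 namespaces,
# data and discovery listeners on 10.0.0.3:4420, and a 32 MiB malloc bdev with
# 4096-byte blocks attached as namespace 1.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The JSON that gen_nvmf_target_json prints next is handed to bdevperf through process substitution, which is why the traced invocation reads --json /dev/fd/62.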
00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:24.519 04:01:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:24.519 "params": { 00:11:24.519 "name": "Nvme1", 00:11:24.519 "trtype": "tcp", 00:11:24.519 "traddr": "10.0.0.3", 00:11:24.519 "adrfam": "ipv4", 00:11:24.519 "trsvcid": "4420", 00:11:24.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.519 "hdgst": false, 00:11:24.519 "ddgst": false 00:11:24.519 }, 00:11:24.519 "method": "bdev_nvme_attach_controller" 00:11:24.519 }' 00:11:24.519 [2024-12-09 04:01:06.370125] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:11:24.519 [2024-12-09 04:01:06.370237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65771 ] 00:11:24.776 [2024-12-09 04:01:06.521413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.777 [2024-12-09 04:01:06.600072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.777 [2024-12-09 04:01:06.691641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:25.094 Running I/O for 10 seconds... 00:11:26.961 6275.00 IOPS, 49.02 MiB/s [2024-12-09T04:01:09.845Z] 6308.00 IOPS, 49.28 MiB/s [2024-12-09T04:01:11.217Z] 6299.33 IOPS, 49.21 MiB/s [2024-12-09T04:01:12.149Z] 6272.00 IOPS, 49.00 MiB/s [2024-12-09T04:01:13.079Z] 6256.60 IOPS, 48.88 MiB/s [2024-12-09T04:01:14.011Z] 6309.33 IOPS, 49.29 MiB/s [2024-12-09T04:01:14.941Z] 6339.14 IOPS, 49.52 MiB/s [2024-12-09T04:01:15.873Z] 6353.00 IOPS, 49.63 MiB/s [2024-12-09T04:01:17.245Z] 6364.22 IOPS, 49.72 MiB/s [2024-12-09T04:01:17.245Z] 6371.40 IOPS, 49.78 MiB/s 00:11:35.295 Latency(us) 00:11:35.295 [2024-12-09T04:01:17.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:35.295 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:35.295 Verification LBA range: start 0x0 length 0x1000 00:11:35.295 Nvme1n1 : 10.01 6374.39 49.80 0.00 0.00 20017.41 283.00 31457.28 00:11:35.295 [2024-12-09T04:01:17.245Z] =================================================================================================================== 00:11:35.295 [2024-12-09T04:01:17.245Z] Total : 6374.39 49.80 0.00 0.00 20017.41 283.00 31457.28 00:11:35.295 04:01:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65889 00:11:35.295 04:01:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:35.295 04:01:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:35.295 04:01:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:35.295 04:01:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.295 04:01:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:35.295 04:01:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:35.295 04:01:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:35.295 04:01:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:35.295 { 00:11:35.295 "params": { 00:11:35.295 "name": "Nvme$subsystem", 00:11:35.295 "trtype": "$TEST_TRANSPORT", 00:11:35.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:35.295 "adrfam": "ipv4", 00:11:35.295 "trsvcid": "$NVMF_PORT", 00:11:35.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:35.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:35.295 "hdgst": ${hdgst:-false}, 00:11:35.295 "ddgst": ${ddgst:-false} 00:11:35.295 }, 00:11:35.295 "method": "bdev_nvme_attach_controller" 00:11:35.295 } 00:11:35.295 EOF 00:11:35.295 )") 00:11:35.295 04:01:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:35.295 [2024-12-09 04:01:17.111927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.295 [2024-12-09 04:01:17.111986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.295 04:01:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:11:35.295 04:01:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:35.295 04:01:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:35.295 "params": { 00:11:35.295 "name": "Nvme1", 00:11:35.295 "trtype": "tcp", 00:11:35.295 "traddr": "10.0.0.3", 00:11:35.295 "adrfam": "ipv4", 00:11:35.295 "trsvcid": "4420", 00:11:35.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:35.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:35.295 "hdgst": false, 00:11:35.295 "ddgst": false 00:11:35.295 }, 00:11:35.295 "method": "bdev_nvme_attach_controller" 00:11:35.295 }' 00:11:35.295 [2024-12-09 04:01:17.123869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.295 [2024-12-09 04:01:17.123910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.295 [2024-12-09 04:01:17.131861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.295 [2024-12-09 04:01:17.131901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.295 [2024-12-09 04:01:17.139863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.295 [2024-12-09 04:01:17.139887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.295 [2024-12-09 04:01:17.147864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.295 [2024-12-09 04:01:17.147887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.295 [2024-12-09 04:01:17.151787] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:11:35.295 [2024-12-09 04:01:17.151886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65889 ] 00:11:35.295 [2024-12-09 04:01:17.159865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.295 [2024-12-09 04:01:17.159887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.295 [2024-12-09 04:01:17.167875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.295 [2024-12-09 04:01:17.167915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.295 [2024-12-09 04:01:17.175870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.295 [2024-12-09 04:01:17.175892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.295 [2024-12-09 04:01:17.183872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.295 [2024-12-09 04:01:17.183911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.295 [2024-12-09 04:01:17.191872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.295 [2024-12-09 04:01:17.191910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.295 [2024-12-09 04:01:17.203877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.295 [2024-12-09 04:01:17.203915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.295 [2024-12-09 04:01:17.211876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.295 [2024-12-09 04:01:17.211899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.295 [2024-12-09 04:01:17.223882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.295 [2024-12-09 04:01:17.223921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.295 [2024-12-09 04:01:17.231883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.295 [2024-12-09 04:01:17.231906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.295 [2024-12-09 04:01:17.239885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.295 [2024-12-09 04:01:17.239907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.554 [2024-12-09 04:01:17.247903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.554 [2024-12-09 04:01:17.247925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.554 [2024-12-09 04:01:17.255893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.554 [2024-12-09 04:01:17.255931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.554 [2024-12-09 04:01:17.263897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.554 [2024-12-09 04:01:17.263935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.554 [2024-12-09 04:01:17.271930] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.554 [2024-12-09 04:01:17.271976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.554 [2024-12-09 04:01:17.279953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.554 [2024-12-09 04:01:17.279998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.554 [2024-12-09 04:01:17.287943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.554 [2024-12-09 04:01:17.287987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.554 [2024-12-09 04:01:17.294700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.554 [2024-12-09 04:01:17.299945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.554 [2024-12-09 04:01:17.299992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.554 [2024-12-09 04:01:17.311956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.554 [2024-12-09 04:01:17.312008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.554 [2024-12-09 04:01:17.319922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.554 [2024-12-09 04:01:17.319953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.554 [2024-12-09 04:01:17.327932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.554 [2024-12-09 04:01:17.327980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.554 [2024-12-09 04:01:17.335966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.554 [2024-12-09 04:01:17.336011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.554 [2024-12-09 04:01:17.343954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.554 [2024-12-09 04:01:17.344001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.554 [2024-12-09 04:01:17.351916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.554 [2024-12-09 04:01:17.351960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.554 [2024-12-09 04:01:17.359769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.554 [2024-12-09 04:01:17.363926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.554 [2024-12-09 04:01:17.363986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.554 [2024-12-09 04:01:17.371934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.554 [2024-12-09 04:01:17.371964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.554 [2024-12-09 04:01:17.383938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.554 [2024-12-09 04:01:17.383985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.554 [2024-12-09 04:01:17.391939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.555 [2024-12-09 04:01:17.391983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:11:35.555 [2024-12-09 04:01:17.399950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.555 [2024-12-09 04:01:17.399996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.555 [2024-12-09 04:01:17.407935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.555 [2024-12-09 04:01:17.407980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.555 [2024-12-09 04:01:17.415935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.555 [2024-12-09 04:01:17.415980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.555 [2024-12-09 04:01:17.427950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.555 [2024-12-09 04:01:17.428000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.555 [2024-12-09 04:01:17.435962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.555 [2024-12-09 04:01:17.436024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.555 [2024-12-09 04:01:17.443962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.555 [2024-12-09 04:01:17.444013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.555 [2024-12-09 04:01:17.448378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:35.555 [2024-12-09 04:01:17.451976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.555 [2024-12-09 04:01:17.452025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.555 [2024-12-09 04:01:17.459967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.555 [2024-12-09 04:01:17.460016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.555 [2024-12-09 04:01:17.471970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.555 [2024-12-09 04:01:17.472021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.555 [2024-12-09 04:01:17.479969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.555 [2024-12-09 04:01:17.480017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.555 [2024-12-09 04:01:17.487970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.555 [2024-12-09 04:01:17.488017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.555 [2024-12-09 04:01:17.495989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.555 [2024-12-09 04:01:17.496035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.822 [2024-12-09 04:01:17.503971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.822 [2024-12-09 04:01:17.504017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.822 [2024-12-09 04:01:17.515999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.822 [2024-12-09 04:01:17.516053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
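Editor's note: among the namespace errors above, the sock_subsystem_init notice records that the default socket implementation was overridden to uring, i.e. this run's TCP initiator uses the io_uring based sock backend instead of the default posix one. A hedged sketch of the two usual ways such an override is requested follows; method and flag names are the usual rpc.py/JSON-config spellings and should be checked against the SPDK tree in use.

# Illustrative only, not taken from this test:
# 1) At runtime, against an already running SPDK application:
scripts/rpc.py sock_set_default_impl -i uring
# 2) Or up front, in the JSON config the application is started with:
#    { "subsystem": "sock",
#      "config": [ { "method": "sock_set_default_impl",
#                    "params": { "impl_name": "uring" } } ] }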
00:11:35.822 [2024-12-09 04:01:17.523987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.822 [2024-12-09 04:01:17.524036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.822 [2024-12-09 04:01:17.536003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.822 [2024-12-09 04:01:17.536054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.544001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.544050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.552057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.552110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.560031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.560079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.568022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.568063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.576038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.576084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.584067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.584097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 Running I/O for 5 seconds... 
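Editor's note: "Running I/O for 5 seconds..." marks the start of bdevperf's measurement window, and the periodic statistics interleaved below (e.g. "12323.00 IOPS, 96.27 MiB/s") can be sanity-checked by dividing bandwidth by IOPS, which recovers the per-I/O transfer size. A one-line check, assuming both figures are averaged over the same interval:

# 96.27 MiB/s at 12323 IOPS works out to roughly 8192 bytes (8 KiB) per I/O.
awk 'BEGIN { iops = 12323.00; mibs = 96.27; printf "%.0f bytes per I/O\n", mibs * 1024 * 1024 / iops }'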
00:11:35.823 [2024-12-09 04:01:17.595828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.595880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.605393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.605430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.616633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.616686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.626681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.626734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.637203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.637282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.652898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.652965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.668964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.669020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.686859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.686911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.696716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.696767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.707214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.707276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.724964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.725015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.740104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.740156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.748783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.748835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.823 [2024-12-09 04:01:17.760695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.823 [2024-12-09 04:01:17.760747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:17.770634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 
[2024-12-09 04:01:17.770685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:17.785251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:17.785326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:17.794761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:17.794808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:17.807870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:17.807938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:17.818106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:17.818157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:17.831919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:17.831973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:17.841391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:17.841433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:17.854958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:17.855011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:17.865022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:17.865073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:17.879237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:17.879275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:17.891102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:17.891155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:17.907929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:17.907980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:17.924256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:17.924308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:17.942577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:17.942629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:17.956905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:17.956958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:17.966413] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:17.966465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:17.980154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:17.980237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:17.989284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:17.989404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:18.002063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:18.002115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.081 [2024-12-09 04:01:18.018803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.081 [2024-12-09 04:01:18.018855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.346 [2024-12-09 04:01:18.035676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.346 [2024-12-09 04:01:18.035728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.346 [2024-12-09 04:01:18.045419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.346 [2024-12-09 04:01:18.045476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.346 [2024-12-09 04:01:18.058928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.346 [2024-12-09 04:01:18.058981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.346 [2024-12-09 04:01:18.067963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.346 [2024-12-09 04:01:18.068014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.346 [2024-12-09 04:01:18.081978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.346 [2024-12-09 04:01:18.082030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.346 [2024-12-09 04:01:18.091344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.346 [2024-12-09 04:01:18.091397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.346 [2024-12-09 04:01:18.104704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.346 [2024-12-09 04:01:18.104758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.346 [2024-12-09 04:01:18.121334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.346 [2024-12-09 04:01:18.121409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.346 [2024-12-09 04:01:18.136798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.346 [2024-12-09 04:01:18.136849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.346 [2024-12-09 04:01:18.146090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.347 [2024-12-09 04:01:18.146141] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.347 [2024-12-09 04:01:18.160581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.347 [2024-12-09 04:01:18.160651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.347 [2024-12-09 04:01:18.169974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.347 [2024-12-09 04:01:18.170024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.347 [2024-12-09 04:01:18.183739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.347 [2024-12-09 04:01:18.183792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.347 [2024-12-09 04:01:18.193425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.347 [2024-12-09 04:01:18.193463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.347 [2024-12-09 04:01:18.207849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.347 [2024-12-09 04:01:18.207901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.347 [2024-12-09 04:01:18.224792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.347 [2024-12-09 04:01:18.224860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.347 [2024-12-09 04:01:18.233533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.347 [2024-12-09 04:01:18.233595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.347 [2024-12-09 04:01:18.248458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.347 [2024-12-09 04:01:18.248509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.347 [2024-12-09 04:01:18.264755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.347 [2024-12-09 04:01:18.264808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.347 [2024-12-09 04:01:18.281266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.347 [2024-12-09 04:01:18.281328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.604 [2024-12-09 04:01:18.297471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.604 [2024-12-09 04:01:18.297524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.604 [2024-12-09 04:01:18.314379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.604 [2024-12-09 04:01:18.314431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.604 [2024-12-09 04:01:18.329758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.604 [2024-12-09 04:01:18.329811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.604 [2024-12-09 04:01:18.338970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.604 [2024-12-09 04:01:18.339021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.604 [2024-12-09 04:01:18.353643] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.604 [2024-12-09 04:01:18.353682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.604 [2024-12-09 04:01:18.370567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.604 [2024-12-09 04:01:18.370636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.604 [2024-12-09 04:01:18.380377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.605 [2024-12-09 04:01:18.380426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.605 [2024-12-09 04:01:18.390629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.605 [2024-12-09 04:01:18.390679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.605 [2024-12-09 04:01:18.402456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.605 [2024-12-09 04:01:18.402508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.605 [2024-12-09 04:01:18.411460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.605 [2024-12-09 04:01:18.411512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.605 [2024-12-09 04:01:18.425668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.605 [2024-12-09 04:01:18.425765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.605 [2024-12-09 04:01:18.434877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.605 [2024-12-09 04:01:18.434929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.605 [2024-12-09 04:01:18.449612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.605 [2024-12-09 04:01:18.449681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.605 [2024-12-09 04:01:18.466113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.605 [2024-12-09 04:01:18.466153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.605 [2024-12-09 04:01:18.475728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.605 [2024-12-09 04:01:18.475779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.605 [2024-12-09 04:01:18.489498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.605 [2024-12-09 04:01:18.489534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.605 [2024-12-09 04:01:18.498660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.605 [2024-12-09 04:01:18.498711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.605 [2024-12-09 04:01:18.512955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.605 [2024-12-09 04:01:18.513006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.605 [2024-12-09 04:01:18.530109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.605 [2024-12-09 04:01:18.530163] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.605 [2024-12-09 04:01:18.540397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.605 [2024-12-09 04:01:18.540435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.605 [2024-12-09 04:01:18.551378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.605 [2024-12-09 04:01:18.551431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.862 [2024-12-09 04:01:18.566539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.862 [2024-12-09 04:01:18.566601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.862 [2024-12-09 04:01:18.581087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.862 [2024-12-09 04:01:18.581141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.862 12323.00 IOPS, 96.27 MiB/s [2024-12-09T04:01:18.812Z] [2024-12-09 04:01:18.596508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.862 [2024-12-09 04:01:18.596541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.862 [2024-12-09 04:01:18.605849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.862 [2024-12-09 04:01:18.605901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.862 [2024-12-09 04:01:18.617603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.862 [2024-12-09 04:01:18.617652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.862 [2024-12-09 04:01:18.628475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.862 [2024-12-09 04:01:18.628513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.862 [2024-12-09 04:01:18.643567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.862 [2024-12-09 04:01:18.643635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.862 [2024-12-09 04:01:18.662275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.862 [2024-12-09 04:01:18.662329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.862 [2024-12-09 04:01:18.677099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.862 [2024-12-09 04:01:18.677153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.862 [2024-12-09 04:01:18.687055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.863 [2024-12-09 04:01:18.687108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.863 [2024-12-09 04:01:18.699687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.863 [2024-12-09 04:01:18.699741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.863 [2024-12-09 04:01:18.716246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.863 [2024-12-09 04:01:18.716328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.863 [2024-12-09 
04:01:18.733498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.863 [2024-12-09 04:01:18.733552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.863 [2024-12-09 04:01:18.750205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.863 [2024-12-09 04:01:18.750291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.863 [2024-12-09 04:01:18.760316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.863 [2024-12-09 04:01:18.760353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.863 [2024-12-09 04:01:18.774823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.863 [2024-12-09 04:01:18.774876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.863 [2024-12-09 04:01:18.785316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.863 [2024-12-09 04:01:18.785407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.863 [2024-12-09 04:01:18.800447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.863 [2024-12-09 04:01:18.800486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.120 [2024-12-09 04:01:18.816270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.120 [2024-12-09 04:01:18.816322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.120 [2024-12-09 04:01:18.826049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.120 [2024-12-09 04:01:18.826099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.120 [2024-12-09 04:01:18.841775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.120 [2024-12-09 04:01:18.841825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.121 [2024-12-09 04:01:18.858141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.121 [2024-12-09 04:01:18.858204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.121 [2024-12-09 04:01:18.875952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.121 [2024-12-09 04:01:18.876002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.121 [2024-12-09 04:01:18.890050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.121 [2024-12-09 04:01:18.890104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.121 [2024-12-09 04:01:18.907028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.121 [2024-12-09 04:01:18.907096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.121 [2024-12-09 04:01:18.916842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.121 [2024-12-09 04:01:18.916895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.121 [2024-12-09 04:01:18.931670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.121 [2024-12-09 04:01:18.931722] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.121 [2024-12-09 04:01:18.943747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.121 [2024-12-09 04:01:18.943799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.121 [2024-12-09 04:01:18.960013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.121 [2024-12-09 04:01:18.960064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.121 [2024-12-09 04:01:18.969932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.121 [2024-12-09 04:01:18.969983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.121 [2024-12-09 04:01:18.984276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.121 [2024-12-09 04:01:18.984339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.121 [2024-12-09 04:01:19.000642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.121 [2024-12-09 04:01:19.000683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.121 [2024-12-09 04:01:19.010156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.121 [2024-12-09 04:01:19.010233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.121 [2024-12-09 04:01:19.024348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.121 [2024-12-09 04:01:19.024400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.121 [2024-12-09 04:01:19.040846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.121 [2024-12-09 04:01:19.040898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.121 [2024-12-09 04:01:19.057446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.121 [2024-12-09 04:01:19.057498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.121 [2024-12-09 04:01:19.067020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.121 [2024-12-09 04:01:19.067071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.379 [2024-12-09 04:01:19.081789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.379 [2024-12-09 04:01:19.081857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.379 [2024-12-09 04:01:19.090962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.379 [2024-12-09 04:01:19.091016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.379 [2024-12-09 04:01:19.105102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.379 [2024-12-09 04:01:19.105155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.379 [2024-12-09 04:01:19.123313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.379 [2024-12-09 04:01:19.123366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.379 [2024-12-09 04:01:19.134499] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.379 [2024-12-09 04:01:19.134582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.379 [2024-12-09 04:01:19.147393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.379 [2024-12-09 04:01:19.147447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.379 [2024-12-09 04:01:19.165232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.379 [2024-12-09 04:01:19.165286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.379 [2024-12-09 04:01:19.180109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.379 [2024-12-09 04:01:19.180161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.379 [2024-12-09 04:01:19.189167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.379 [2024-12-09 04:01:19.189245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.379 [2024-12-09 04:01:19.204205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.379 [2024-12-09 04:01:19.204290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.379 [2024-12-09 04:01:19.213703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.379 [2024-12-09 04:01:19.213771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.379 [2024-12-09 04:01:19.226929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.379 [2024-12-09 04:01:19.226981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.379 [2024-12-09 04:01:19.243619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.379 [2024-12-09 04:01:19.243672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.379 [2024-12-09 04:01:19.260258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.379 [2024-12-09 04:01:19.260311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.379 [2024-12-09 04:01:19.270084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.379 [2024-12-09 04:01:19.270135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.379 [2024-12-09 04:01:19.284547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.379 [2024-12-09 04:01:19.284614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.379 [2024-12-09 04:01:19.303368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.379 [2024-12-09 04:01:19.303438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.379 [2024-12-09 04:01:19.317926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.379 [2024-12-09 04:01:19.317978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.334444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.334496] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.344219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.344285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.354846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.354898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.367286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.367338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.376912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.376964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.387547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.387599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.398096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.398148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.410469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.410524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.419397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.419449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.435452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.435504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.444931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.444984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.461562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.461600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.470855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.470906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.483297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.483348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.492929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.492980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.507909] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.507963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.525836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.525889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.540725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.540778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.559160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.559241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.638 [2024-12-09 04:01:19.573583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.638 [2024-12-09 04:01:19.573636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.896 [2024-12-09 04:01:19.587436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.897 [2024-12-09 04:01:19.587503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.897 12244.00 IOPS, 95.66 MiB/s [2024-12-09T04:01:19.847Z] [2024-12-09 04:01:19.604586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.897 [2024-12-09 04:01:19.604623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.897 [2024-12-09 04:01:19.615223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.897 [2024-12-09 04:01:19.615252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.897 [2024-12-09 04:01:19.629728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.897 [2024-12-09 04:01:19.629778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.897 [2024-12-09 04:01:19.639833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.897 [2024-12-09 04:01:19.639882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.897 [2024-12-09 04:01:19.655314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.897 [2024-12-09 04:01:19.655351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.897 [2024-12-09 04:01:19.671474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.897 [2024-12-09 04:01:19.671535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.897 [2024-12-09 04:01:19.680996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.897 [2024-12-09 04:01:19.681034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.897 [2024-12-09 04:01:19.694155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.897 [2024-12-09 04:01:19.694211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.897 [2024-12-09 04:01:19.704998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
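Editor's note: the "Requested NSID 1 already in use" / "Unable to add namespace" pairs that dominate this stretch of the log appear to be deliberately provoked: while bdevperf drives zero-copy I/O, the test keeps asking the target to add a namespace under an NSID that is already allocated, and each attempt fails in the paused-subsystem callback (consistent with the nvmf_rpc_ns_paused frames in the messages). A hedged reproduction against a running target is sketched below; "Malloc0" is a placeholder bdev name, the NQN and NSID are the ones visible in the log, and the option spelling should be checked with rpc.py nvmf_subsystem_add_ns --help.

# Illustrative, not the test's actual loop:
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # first add succeeds
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # fails: "Requested NSID 1 already in use"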
00:11:37.897 [2024-12-09 04:01:19.705046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.897 [2024-12-09 04:01:19.721865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.897 [2024-12-09 04:01:19.721932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.897 [2024-12-09 04:01:19.740484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.897 [2024-12-09 04:01:19.740535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.897 [2024-12-09 04:01:19.754770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.897 [2024-12-09 04:01:19.754821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.897 [2024-12-09 04:01:19.770444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.897 [2024-12-09 04:01:19.770495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.897 [2024-12-09 04:01:19.779537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.897 [2024-12-09 04:01:19.779588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.897 [2024-12-09 04:01:19.795118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.897 [2024-12-09 04:01:19.795196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.897 [2024-12-09 04:01:19.806675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.897 [2024-12-09 04:01:19.806726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.897 [2024-12-09 04:01:19.822312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.897 [2024-12-09 04:01:19.822364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.897 [2024-12-09 04:01:19.840149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.897 [2024-12-09 04:01:19.840210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:19.849836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:19.849887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:19.864107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:19.864158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:19.873863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:19.873929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:19.886839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:19.886891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:19.903321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:19.903375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:19.920005] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:19.920057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:19.929544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:19.929598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:19.943953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:19.944004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:19.953538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:19.953583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:19.964666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:19.964716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:19.978840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:19.978892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:19.988340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:19.988391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:20.004043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:20.004082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:20.019524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:20.019566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:20.028720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:20.028761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:20.045300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:20.045339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:20.054868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:20.054943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:20.068086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:20.068157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:20.078718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:20.078770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.154 [2024-12-09 04:01:20.092872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.154 [2024-12-09 04:01:20.092925] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.412 [2024-12-09 04:01:20.103497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.412 [2024-12-09 04:01:20.103572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.412 [2024-12-09 04:01:20.118432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.412 [2024-12-09 04:01:20.118484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.413 [2024-12-09 04:01:20.134946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.413 [2024-12-09 04:01:20.135000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.413 [2024-12-09 04:01:20.144801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.413 [2024-12-09 04:01:20.144851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.413 [2024-12-09 04:01:20.156016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.413 [2024-12-09 04:01:20.156070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.413 [2024-12-09 04:01:20.166388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.413 [2024-12-09 04:01:20.166439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.413 [2024-12-09 04:01:20.176968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.413 [2024-12-09 04:01:20.177021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.413 [2024-12-09 04:01:20.189347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.413 [2024-12-09 04:01:20.189408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.413 [2024-12-09 04:01:20.199128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.413 [2024-12-09 04:01:20.199191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.413 [2024-12-09 04:01:20.212222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.413 [2024-12-09 04:01:20.212271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.413 [2024-12-09 04:01:20.228518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.413 [2024-12-09 04:01:20.228587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.413 [2024-12-09 04:01:20.246991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.413 [2024-12-09 04:01:20.247044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.413 [2024-12-09 04:01:20.260892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.413 [2024-12-09 04:01:20.260958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.413 [2024-12-09 04:01:20.270090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.413 [2024-12-09 04:01:20.270141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.413 [2024-12-09 04:01:20.281090] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.413 [2024-12-09 04:01:20.281143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.413 [2024-12-09 04:01:20.291952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.413 [2024-12-09 04:01:20.292002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.413 [2024-12-09 04:01:20.307125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.413 [2024-12-09 04:01:20.307204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.413 [2024-12-09 04:01:20.323980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.413 [2024-12-09 04:01:20.324033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.413 [2024-12-09 04:01:20.341176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.413 [2024-12-09 04:01:20.341263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.413 [2024-12-09 04:01:20.357291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.413 [2024-12-09 04:01:20.357343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 [2024-12-09 04:01:20.374679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.374732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 [2024-12-09 04:01:20.384588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.384642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 [2024-12-09 04:01:20.398909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.398961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 [2024-12-09 04:01:20.408303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.408355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 [2024-12-09 04:01:20.422707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.422759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 [2024-12-09 04:01:20.432573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.432626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 [2024-12-09 04:01:20.447148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.447220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 [2024-12-09 04:01:20.456408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.456461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 [2024-12-09 04:01:20.470898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.470951] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 [2024-12-09 04:01:20.480290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.480341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 [2024-12-09 04:01:20.496699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.496751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 [2024-12-09 04:01:20.513166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.513251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 [2024-12-09 04:01:20.522938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.522990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 [2024-12-09 04:01:20.537726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.537778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 [2024-12-09 04:01:20.556183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.556247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 [2024-12-09 04:01:20.566312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.566380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 [2024-12-09 04:01:20.575795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.575855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 12193.00 IOPS, 95.26 MiB/s [2024-12-09T04:01:20.623Z] [2024-12-09 04:01:20.590790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.590842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.673 [2024-12-09 04:01:20.608331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.673 [2024-12-09 04:01:20.608367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 04:01:20.623508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.623578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 04:01:20.633595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.633668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 04:01:20.645408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.645444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 04:01:20.660829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.660881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 
04:01:20.671295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.671335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 04:01:20.682419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.682452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 04:01:20.698779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.698832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 04:01:20.708500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.708570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 04:01:20.724852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.724895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 04:01:20.743766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.743817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 04:01:20.754105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.754160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 04:01:20.768878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.768931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 04:01:20.785763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.785815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 04:01:20.795007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.795059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 04:01:20.810054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.810122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 04:01:20.826440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.826493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 04:01:20.835955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.836006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 04:01:20.850242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.850307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.932 [2024-12-09 04:01:20.867329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.932 [2024-12-09 04:01:20.867367] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:20.883193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:20.883228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:20.893169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:20.893265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:20.907211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:20.907247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:20.915323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:20.915358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:20.927596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:20.927646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:20.939105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:20.939158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:20.955766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:20.955817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:20.965427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:20.965481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:20.979786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:20.979838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:20.989461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:20.989514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:21.002117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:21.002201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:21.017414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:21.017453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:21.033975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:21.034027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:21.043832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:21.043885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:21.058548] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:21.058609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:21.068151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:21.068229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:21.084605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:21.084657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:21.100721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:21.100773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:21.109414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:21.109451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.190 [2024-12-09 04:01:21.126923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.190 [2024-12-09 04:01:21.126973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.448 [2024-12-09 04:01:21.142228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.448 [2024-12-09 04:01:21.142308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.448 [2024-12-09 04:01:21.151494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.448 [2024-12-09 04:01:21.151545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.448 [2024-12-09 04:01:21.162642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.448 [2024-12-09 04:01:21.162709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.448 [2024-12-09 04:01:21.172694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.448 [2024-12-09 04:01:21.172746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.448 [2024-12-09 04:01:21.187017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.448 [2024-12-09 04:01:21.187086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.448 [2024-12-09 04:01:21.196855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.448 [2024-12-09 04:01:21.196909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.448 [2024-12-09 04:01:21.211565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.448 [2024-12-09 04:01:21.211617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.448 [2024-12-09 04:01:21.220523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.448 [2024-12-09 04:01:21.220559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.449 [2024-12-09 04:01:21.235122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.449 [2024-12-09 04:01:21.235159] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.449 [2024-12-09 04:01:21.244893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.449 [2024-12-09 04:01:21.244962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.449 [2024-12-09 04:01:21.260287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.449 [2024-12-09 04:01:21.260337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.449 [2024-12-09 04:01:21.269238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.449 [2024-12-09 04:01:21.269290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.449 [2024-12-09 04:01:21.283142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.449 [2024-12-09 04:01:21.283207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.449 [2024-12-09 04:01:21.292914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.449 [2024-12-09 04:01:21.292965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.449 [2024-12-09 04:01:21.307142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.449 [2024-12-09 04:01:21.307205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.449 [2024-12-09 04:01:21.317035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.449 [2024-12-09 04:01:21.317088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.449 [2024-12-09 04:01:21.331085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.449 [2024-12-09 04:01:21.331138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.449 [2024-12-09 04:01:21.340410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.449 [2024-12-09 04:01:21.340461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.449 [2024-12-09 04:01:21.354136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.449 [2024-12-09 04:01:21.354231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.449 [2024-12-09 04:01:21.363846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.449 [2024-12-09 04:01:21.363898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.449 [2024-12-09 04:01:21.374856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.449 [2024-12-09 04:01:21.374909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.449 [2024-12-09 04:01:21.385005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.449 [2024-12-09 04:01:21.385063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.707 [2024-12-09 04:01:21.398710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.707 [2024-12-09 04:01:21.398763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.707 [2024-12-09 04:01:21.407655] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.707 [2024-12-09 04:01:21.407707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.707 [2024-12-09 04:01:21.421583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.707 [2024-12-09 04:01:21.421636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.707 [2024-12-09 04:01:21.431506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.707 [2024-12-09 04:01:21.431560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.707 [2024-12-09 04:01:21.444829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.707 [2024-12-09 04:01:21.444882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.707 [2024-12-09 04:01:21.454267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.707 [2024-12-09 04:01:21.454319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.707 [2024-12-09 04:01:21.464952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.707 [2024-12-09 04:01:21.465005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.707 [2024-12-09 04:01:21.479383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.707 [2024-12-09 04:01:21.479436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.707 [2024-12-09 04:01:21.488419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.707 [2024-12-09 04:01:21.488472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.707 [2024-12-09 04:01:21.503328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.707 [2024-12-09 04:01:21.503366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.707 [2024-12-09 04:01:21.519851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.707 [2024-12-09 04:01:21.519901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.707 [2024-12-09 04:01:21.537393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.707 [2024-12-09 04:01:21.537432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.707 [2024-12-09 04:01:21.553453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.707 [2024-12-09 04:01:21.553491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.708 [2024-12-09 04:01:21.570684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.708 [2024-12-09 04:01:21.570736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.708 12231.00 IOPS, 95.55 MiB/s [2024-12-09T04:01:21.658Z] [2024-12-09 04:01:21.588794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.708 [2024-12-09 04:01:21.588825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.708 [2024-12-09 04:01:21.598586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:39.708 [2024-12-09 04:01:21.598638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.708 [2024-12-09 04:01:21.609096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.708 [2024-12-09 04:01:21.609147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.708 [2024-12-09 04:01:21.625614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.708 [2024-12-09 04:01:21.625668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.708 [2024-12-09 04:01:21.641985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.708 [2024-12-09 04:01:21.642037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.708 [2024-12-09 04:01:21.651102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.708 [2024-12-09 04:01:21.651155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.966 [2024-12-09 04:01:21.666957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.966 [2024-12-09 04:01:21.667008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.966 [2024-12-09 04:01:21.682215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.966 [2024-12-09 04:01:21.682259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.966 [2024-12-09 04:01:21.692473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.966 [2024-12-09 04:01:21.692509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.966 [2024-12-09 04:01:21.703855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.966 [2024-12-09 04:01:21.703912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.966 [2024-12-09 04:01:21.721251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.966 [2024-12-09 04:01:21.721300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.966 [2024-12-09 04:01:21.737345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.966 [2024-12-09 04:01:21.737422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.966 [2024-12-09 04:01:21.755970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.966 [2024-12-09 04:01:21.756030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.966 [2024-12-09 04:01:21.769923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.966 [2024-12-09 04:01:21.769974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.966 [2024-12-09 04:01:21.778953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.966 [2024-12-09 04:01:21.779005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.966 [2024-12-09 04:01:21.794075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.966 [2024-12-09 04:01:21.794112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.966 [2024-12-09 04:01:21.811603] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.966 [2024-12-09 04:01:21.811657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.966 [2024-12-09 04:01:21.821003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.966 [2024-12-09 04:01:21.821055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.966 [2024-12-09 04:01:21.834825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.966 [2024-12-09 04:01:21.834890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.966 [2024-12-09 04:01:21.843984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.966 [2024-12-09 04:01:21.844037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.966 [2024-12-09 04:01:21.858708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.966 [2024-12-09 04:01:21.858762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.966 [2024-12-09 04:01:21.868349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.966 [2024-12-09 04:01:21.868402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.966 [2024-12-09 04:01:21.883513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.966 [2024-12-09 04:01:21.883572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.966 [2024-12-09 04:01:21.901747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.966 [2024-12-09 04:01:21.901801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:21.917619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:21.917658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:21.935315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:21.935368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:21.951147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:21.951211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:21.962668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:21.962723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:21.978732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:21.978784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:21.994981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:21.995032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:22.006099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:22.006151] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:22.022805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:22.022857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:22.038676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:22.038727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:22.048697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:22.048750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:22.062878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:22.062940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:22.071680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:22.071732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:22.086662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:22.086713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:22.105384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:22.105438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:22.119823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:22.119886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:22.131506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:22.131558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:22.148733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:22.148786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:22.162786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:22.162838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.225 [2024-12-09 04:01:22.171421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.225 [2024-12-09 04:01:22.171474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.484 [2024-12-09 04:01:22.186388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.484 [2024-12-09 04:01:22.186439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.484 [2024-12-09 04:01:22.195375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.484 [2024-12-09 04:01:22.195453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.484 [2024-12-09 04:01:22.209036] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.484 [2024-12-09 04:01:22.209090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.484 [2024-12-09 04:01:22.218336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.484 [2024-12-09 04:01:22.218387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.484 [2024-12-09 04:01:22.232148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.484 [2024-12-09 04:01:22.232208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.484 [2024-12-09 04:01:22.240611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.484 [2024-12-09 04:01:22.240662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.484 [2024-12-09 04:01:22.256332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.484 [2024-12-09 04:01:22.256408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.484 [2024-12-09 04:01:22.274672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.484 [2024-12-09 04:01:22.274750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.484 [2024-12-09 04:01:22.288491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.484 [2024-12-09 04:01:22.288578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.484 [2024-12-09 04:01:22.304722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.484 [2024-12-09 04:01:22.304798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.484 [2024-12-09 04:01:22.322977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.484 [2024-12-09 04:01:22.323055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.484 [2024-12-09 04:01:22.333252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.484 [2024-12-09 04:01:22.333311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.484 [2024-12-09 04:01:22.348265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.484 [2024-12-09 04:01:22.348354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.484 [2024-12-09 04:01:22.364829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.484 [2024-12-09 04:01:22.364918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.484 [2024-12-09 04:01:22.382162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.484 [2024-12-09 04:01:22.382272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.484 [2024-12-09 04:01:22.397034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.484 [2024-12-09 04:01:22.397108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.485 [2024-12-09 04:01:22.406576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.485 [2024-12-09 04:01:22.406643] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.485 [2024-12-09 04:01:22.420553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.485 [2024-12-09 04:01:22.420631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.485 [2024-12-09 04:01:22.430343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.485 [2024-12-09 04:01:22.430405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 [2024-12-09 04:01:22.445213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.445292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 [2024-12-09 04:01:22.456795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.456859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 [2024-12-09 04:01:22.471825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.471896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 [2024-12-09 04:01:22.488098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.488196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 [2024-12-09 04:01:22.503449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.503519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 [2024-12-09 04:01:22.520471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.520590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 [2024-12-09 04:01:22.536589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.536664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 [2024-12-09 04:01:22.553074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.553147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 [2024-12-09 04:01:22.562244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.562315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 [2024-12-09 04:01:22.576846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.576914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 12296.40 IOPS, 96.07 MiB/s [2024-12-09T04:01:22.694Z] [2024-12-09 04:01:22.591734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.591801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 00:11:40.744 Latency(us) 00:11:40.744 [2024-12-09T04:01:22.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.744 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:40.744 Nvme1n1 
: 5.01 12294.47 96.05 0.00 0.00 10398.36 4468.36 17635.14 00:11:40.744 [2024-12-09T04:01:22.694Z] =================================================================================================================== 00:11:40.744 [2024-12-09T04:01:22.694Z] Total : 12294.47 96.05 0.00 0.00 10398.36 4468.36 17635.14 00:11:40.744 [2024-12-09 04:01:22.601711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.601769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 [2024-12-09 04:01:22.609691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.609746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 [2024-12-09 04:01:22.617694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.617747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 [2024-12-09 04:01:22.629764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.629839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 [2024-12-09 04:01:22.641755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.641827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 [2024-12-09 04:01:22.653758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.653827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 [2024-12-09 04:01:22.665727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.665791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 [2024-12-09 04:01:22.677752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.744 [2024-12-09 04:01:22.677827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.744 [2024-12-09 04:01:22.689769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.745 [2024-12-09 04:01:22.689821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 [2024-12-09 04:01:22.701806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.003 [2024-12-09 04:01:22.701867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 [2024-12-09 04:01:22.713802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.003 [2024-12-09 04:01:22.713884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 [2024-12-09 04:01:22.725818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.003 [2024-12-09 04:01:22.725892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 [2024-12-09 04:01:22.737815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.003 [2024-12-09 04:01:22.737902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 [2024-12-09 04:01:22.749830] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.003 [2024-12-09 04:01:22.749902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 [2024-12-09 04:01:22.761795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.003 [2024-12-09 04:01:22.761879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 [2024-12-09 04:01:22.773814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.003 [2024-12-09 04:01:22.773901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 [2024-12-09 04:01:22.785804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.003 [2024-12-09 04:01:22.785885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 [2024-12-09 04:01:22.797836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.003 [2024-12-09 04:01:22.797934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 [2024-12-09 04:01:22.809851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.003 [2024-12-09 04:01:22.809916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 [2024-12-09 04:01:22.821825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.003 [2024-12-09 04:01:22.821885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 [2024-12-09 04:01:22.833836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.003 [2024-12-09 04:01:22.833906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 [2024-12-09 04:01:22.845809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.003 [2024-12-09 04:01:22.845879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 [2024-12-09 04:01:22.857829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.003 [2024-12-09 04:01:22.857896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 [2024-12-09 04:01:22.865848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.003 [2024-12-09 04:01:22.865914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 [2024-12-09 04:01:22.877857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.003 [2024-12-09 04:01:22.877910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 [2024-12-09 04:01:22.889850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.003 [2024-12-09 04:01:22.889914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 [2024-12-09 04:01:22.901877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.003 [2024-12-09 04:01:22.901915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.003 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65889) - No such process 00:11:41.003 04:01:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- 
# wait 65889 00:11:41.003 04:01:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.003 04:01:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.003 04:01:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:41.003 04:01:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.003 04:01:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:41.003 04:01:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.003 04:01:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:41.003 delay0 00:11:41.003 04:01:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.003 04:01:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:41.003 04:01:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.003 04:01:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:41.003 04:01:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.003 04:01:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:11:41.261 [2024-12-09 04:01:23.102338] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:47.820 Initializing NVMe Controllers 00:11:47.820 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:11:47.820 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:47.820 Initialization complete. Launching workers. 
00:11:47.820 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 83 00:11:47.820 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 370, failed to submit 33 00:11:47.820 success 234, unsuccessful 136, failed 0 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:47.820 rmmod nvme_tcp 00:11:47.820 rmmod nvme_fabrics 00:11:47.820 rmmod nvme_keyring 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65732 ']' 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65732 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65732 ']' 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65732 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65732 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65732' 00:11:47.820 killing process with pid 65732 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65732 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65732 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:47.820 04:01:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:47.820 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:48.079 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:48.079 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:48.079 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.079 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.079 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.079 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:11:48.079 00:11:48.079 real 0m25.374s 00:11:48.079 user 0m41.238s 00:11:48.079 sys 0m6.960s 00:11:48.079 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.079 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:48.079 ************************************ 00:11:48.079 END TEST nvmf_zcopy 00:11:48.079 ************************************ 00:11:48.079 04:01:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:48.079 04:01:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.079 04:01:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.079 04:01:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:48.079 ************************************ 00:11:48.079 START TEST nvmf_nmic 00:11:48.079 ************************************ 00:11:48.079 04:01:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:48.079 * Looking for test storage... 00:11:48.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:48.079 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:48.079 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:11:48.079 04:01:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.340 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:48.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.341 --rc genhtml_branch_coverage=1 00:11:48.341 --rc genhtml_function_coverage=1 00:11:48.341 --rc genhtml_legend=1 00:11:48.341 --rc geninfo_all_blocks=1 00:11:48.341 --rc geninfo_unexecuted_blocks=1 00:11:48.341 00:11:48.341 ' 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:48.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.341 --rc genhtml_branch_coverage=1 00:11:48.341 --rc genhtml_function_coverage=1 00:11:48.341 --rc genhtml_legend=1 00:11:48.341 --rc geninfo_all_blocks=1 00:11:48.341 --rc geninfo_unexecuted_blocks=1 00:11:48.341 00:11:48.341 ' 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:48.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.341 --rc genhtml_branch_coverage=1 00:11:48.341 --rc genhtml_function_coverage=1 00:11:48.341 --rc genhtml_legend=1 00:11:48.341 --rc geninfo_all_blocks=1 00:11:48.341 --rc geninfo_unexecuted_blocks=1 00:11:48.341 00:11:48.341 ' 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:48.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.341 --rc genhtml_branch_coverage=1 00:11:48.341 --rc genhtml_function_coverage=1 00:11:48.341 --rc genhtml_legend=1 00:11:48.341 --rc geninfo_all_blocks=1 00:11:48.341 --rc geninfo_unexecuted_blocks=1 00:11:48.341 00:11:48.341 ' 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.341 04:01:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.341 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:48.341 04:01:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:48.341 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:48.342 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:48.342 Cannot 
find device "nvmf_init_br" 00:11:48.342 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:48.342 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:48.342 Cannot find device "nvmf_init_br2" 00:11:48.342 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:48.342 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:48.342 Cannot find device "nvmf_tgt_br" 00:11:48.342 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:11:48.342 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:48.342 Cannot find device "nvmf_tgt_br2" 00:11:48.342 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:11:48.342 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:48.342 Cannot find device "nvmf_init_br" 00:11:48.342 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:11:48.342 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:48.342 Cannot find device "nvmf_init_br2" 00:11:48.342 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:11:48.342 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:48.342 Cannot find device "nvmf_tgt_br" 00:11:48.342 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:11:48.342 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:48.342 Cannot find device "nvmf_tgt_br2" 00:11:48.342 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:11:48.342 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:48.618 Cannot find device "nvmf_br" 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:48.618 Cannot find device "nvmf_init_if" 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:48.618 Cannot find device "nvmf_init_if2" 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:48.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:48.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:48.618 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:48.619 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:48.619 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:48.619 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:48.619 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:48.619 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:11:48.619 00:11:48.619 --- 10.0.0.3 ping statistics --- 00:11:48.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.619 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:48.619 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:48.619 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:48.619 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.105 ms 00:11:48.619 00:11:48.619 --- 10.0.0.4 ping statistics --- 00:11:48.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.619 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:11:48.619 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:48.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:48.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:11:48.619 00:11:48.619 --- 10.0.0.1 ping statistics --- 00:11:48.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.619 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:48.619 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:48.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:48.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:11:48.619 00:11:48.619 --- 10.0.0.2 ping statistics --- 00:11:48.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.619 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:11:48.619 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:48.619 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:11:48.619 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:48.619 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:48.619 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:48.619 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:48.619 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:48.619 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:48.619 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:48.877 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:48.877 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:48.877 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:48.877 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:48.877 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=66274 00:11:48.877 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:48.877 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 66274 00:11:48.877 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 66274 ']' 00:11:48.877 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.877 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.877 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.877 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.877 04:01:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:48.877 [2024-12-09 04:01:30.650874] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
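For reference, the veth topology that nvmf_veth_init assembled in the commands above can be condensed into a short standalone sketch. Interface names, the namespace name, addresses, and the 4420 port are taken directly from the logged commands; this is an illustrative reconstruction of what test/nvmf/common.sh does, not the script itself (the second initiator/target pair, error handling, and the SPDK_NVMF iptables comment tags are omitted).

  # Sketch: one initiator-side veth pair in the root namespace, one target-side
  # veth pair whose far end lives in nvmf_tgt_ns_spdk, both bridged together,
  # with NVMe/TCP port 4420 allowed in (names/IPs as in the log above).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                   # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The bridge joins the root-namespace ends of both veth pairs, so the host initiator at 10.0.0.1 can reach the nvmf_tgt reactor that is started inside nvmf_tgt_ns_spdk and listens on 10.0.0.3:4420, which is exactly what the ping checks above verify before nvmfappstart runs.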
00:11:48.877 [2024-12-09 04:01:30.650998] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.877 [2024-12-09 04:01:30.808880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.134 [2024-12-09 04:01:30.890489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.134 [2024-12-09 04:01:30.890614] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.134 [2024-12-09 04:01:30.890638] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.134 [2024-12-09 04:01:30.890655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.134 [2024-12-09 04:01:30.890669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.134 [2024-12-09 04:01:30.892644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.134 [2024-12-09 04:01:30.892786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.134 [2024-12-09 04:01:30.892962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.134 [2024-12-09 04:01:30.892974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.134 [2024-12-09 04:01:30.977492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:49.721 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.721 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:49.721 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:49.721 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:49.721 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.980 [2024-12-09 04:01:31.680219] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.980 Malloc0 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:49.980 04:01:31 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.980 [2024-12-09 04:01:31.754198] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.980 test case1: single bdev can't be used in multiple subsystems 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.980 [2024-12-09 04:01:31.777952] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:49.980 [2024-12-09 04:01:31.777997] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:49.980 [2024-12-09 04:01:31.778009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.980 request: 00:11:49.980 { 00:11:49.980 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:49.980 "namespace": { 00:11:49.980 "bdev_name": "Malloc0", 00:11:49.980 "no_auto_visible": false, 00:11:49.980 "hide_metadata": false 00:11:49.980 }, 00:11:49.980 "method": "nvmf_subsystem_add_ns", 00:11:49.980 "req_id": 1 00:11:49.980 } 00:11:49.980 Got JSON-RPC error response 00:11:49.980 response: 00:11:49.980 { 00:11:49.980 "code": -32602, 00:11:49.980 "message": "Invalid parameters" 00:11:49.980 } 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:49.980 Adding namespace failed - expected result. 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:49.980 test case2: host connect to nvmf target in multiple paths 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:49.980 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:11:49.981 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.981 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.981 [2024-12-09 04:01:31.794106] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:11:49.981 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.981 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid=9ed3da8d-b493-400f-8e42-fb307dd7edcc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:50.237 04:01:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid=9ed3da8d-b493-400f-8e42-fb307dd7edcc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:11:50.237 04:01:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:50.237 04:01:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:50.237 04:01:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:50.237 04:01:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:50.237 04:01:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:11:52.149 04:01:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:52.149 04:01:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:52.149 04:01:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:52.149 04:01:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:52.149 04:01:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:11:52.149 04:01:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:52.149 04:01:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:52.408 [global] 00:11:52.408 thread=1 00:11:52.408 invalidate=1 00:11:52.408 rw=write 00:11:52.408 time_based=1 00:11:52.408 runtime=1 00:11:52.408 ioengine=libaio 00:11:52.408 direct=1 00:11:52.408 bs=4096 00:11:52.408 iodepth=1 00:11:52.408 norandommap=0 00:11:52.408 numjobs=1 00:11:52.408 00:11:52.408 verify_dump=1 00:11:52.408 verify_backlog=512 00:11:52.408 verify_state_save=0 00:11:52.408 do_verify=1 00:11:52.408 verify=crc32c-intel 00:11:52.408 [job0] 00:11:52.408 filename=/dev/nvme0n1 00:11:52.408 Could not set queue depth (nvme0n1) 00:11:52.408 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:52.408 fio-3.35 00:11:52.408 Starting 1 thread 00:11:53.859 00:11:53.859 job0: (groupid=0, jobs=1): err= 0: pid=66366: Mon Dec 9 04:01:35 2024 00:11:53.859 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:53.859 slat (usec): min=11, max=144, avg=19.83, stdev= 7.42 00:11:53.859 clat (usec): min=144, max=7173, avg=198.20, stdev=235.60 00:11:53.859 lat (usec): min=158, max=7188, avg=218.03, stdev=236.00 00:11:53.859 clat percentiles (usec): 00:11:53.859 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:11:53.859 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:11:53.859 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 215], 95.00th=[ 227], 00:11:53.859 | 99.00th=[ 253], 99.50th=[ 302], 99.90th=[ 5014], 99.95th=[ 5342], 00:11:53.859 | 99.99th=[ 7177] 00:11:53.859 write: IOPS=2817, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec); 0 zone resets 00:11:53.859 slat (usec): min=17, max=172, avg=32.52, stdev=11.01 00:11:53.859 clat (usec): min=87, max=562, avg=119.85, stdev=22.05 00:11:53.859 lat (usec): min=109, max=593, avg=152.37, stdev=27.67 00:11:53.859 clat percentiles (usec): 00:11:53.859 | 1.00th=[ 93], 5.00th=[ 97], 10.00th=[ 99], 20.00th=[ 103], 00:11:53.859 | 30.00th=[ 108], 40.00th=[ 112], 50.00th=[ 116], 60.00th=[ 121], 00:11:53.859 | 70.00th=[ 127], 80.00th=[ 135], 90.00th=[ 145], 95.00th=[ 155], 00:11:53.859 | 99.00th=[ 180], 99.50th=[ 202], 99.90th=[ 371], 99.95th=[ 388], 00:11:53.859 | 99.99th=[ 562] 00:11:53.859 bw ( KiB/s): min=12263, max=12263, per=100.00%, avg=12263.00, stdev= 0.00, samples=1 00:11:53.859 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:11:53.859 lat (usec) : 100=5.84%, 250=93.46%, 500=0.52%, 750=0.02% 00:11:53.859 lat (msec) : 2=0.04%, 4=0.07%, 10=0.06% 00:11:53.859 cpu : usr=2.70%, sys=11.10%, ctx=5384, majf=0, minf=5 00:11:53.859 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:53.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.859 issued rwts: total=2560,2820,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.859 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:53.859 00:11:53.859 Run status group 0 (all jobs): 00:11:53.859 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:11:53.859 WRITE: bw=11.0MiB/s (11.5MB/s), 11.0MiB/s-11.0MiB/s (11.5MB/s-11.5MB/s), io=11.0MiB (11.6MB), run=1001-1001msec 00:11:53.859 00:11:53.859 Disk stats (read/write): 
00:11:53.859 nvme0n1: ios=2264/2560, merge=0/0, ticks=471/351, in_queue=822, util=90.89% 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:53.859 rmmod nvme_tcp 00:11:53.859 rmmod nvme_fabrics 00:11:53.859 rmmod nvme_keyring 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 66274 ']' 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 66274 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 66274 ']' 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 66274 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66274 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.859 killing process with pid 66274 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66274' 00:11:53.859 04:01:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 66274 00:11:53.859 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 66274 00:11:54.118 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.118 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:54.118 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:54.118 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:54.118 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:54.118 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:54.118 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:54.118 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:54.118 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:54.118 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:54.118 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:54.118 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:54.118 04:01:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:54.118 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:54.118 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:54.118 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:54.118 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:54.118 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:11:54.376 00:11:54.376 real 0m6.283s 00:11:54.376 user 0m18.919s 00:11:54.376 sys 0m2.475s 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:11:54.376 ************************************ 00:11:54.376 END TEST nvmf_nmic 00:11:54.376 ************************************ 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:54.376 ************************************ 00:11:54.376 START TEST nvmf_fio_target 00:11:54.376 ************************************ 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:54.376 * Looking for test storage... 00:11:54.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:11:54.376 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:54.636 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:54.636 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:54.636 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:54.636 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:54.636 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.636 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:54.636 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:54.636 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:54.636 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:54.636 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:54.636 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:54.636 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:54.636 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:54.636 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:54.636 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:54.636 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:54.636 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:54.636 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:54.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.637 --rc genhtml_branch_coverage=1 00:11:54.637 --rc genhtml_function_coverage=1 00:11:54.637 --rc genhtml_legend=1 00:11:54.637 --rc geninfo_all_blocks=1 00:11:54.637 --rc geninfo_unexecuted_blocks=1 00:11:54.637 00:11:54.637 ' 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:54.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.637 --rc genhtml_branch_coverage=1 00:11:54.637 --rc genhtml_function_coverage=1 00:11:54.637 --rc genhtml_legend=1 00:11:54.637 --rc geninfo_all_blocks=1 00:11:54.637 --rc geninfo_unexecuted_blocks=1 00:11:54.637 00:11:54.637 ' 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:54.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.637 --rc genhtml_branch_coverage=1 00:11:54.637 --rc genhtml_function_coverage=1 00:11:54.637 --rc genhtml_legend=1 00:11:54.637 --rc geninfo_all_blocks=1 00:11:54.637 --rc geninfo_unexecuted_blocks=1 00:11:54.637 00:11:54.637 ' 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:54.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.637 --rc genhtml_branch_coverage=1 00:11:54.637 --rc genhtml_function_coverage=1 00:11:54.637 --rc genhtml_legend=1 00:11:54.637 --rc geninfo_all_blocks=1 00:11:54.637 --rc geninfo_unexecuted_blocks=1 00:11:54.637 00:11:54.637 ' 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:54.637 
04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.637 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:54.638 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:54.638 04:01:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:54.638 Cannot find device "nvmf_init_br" 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:54.638 Cannot find device "nvmf_init_br2" 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:54.638 Cannot find device "nvmf_tgt_br" 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:54.638 Cannot find device "nvmf_tgt_br2" 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:54.638 Cannot find device "nvmf_init_br" 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:54.638 Cannot find device "nvmf_init_br2" 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:54.638 Cannot find device "nvmf_tgt_br" 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:54.638 Cannot find device "nvmf_tgt_br2" 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:11:54.638 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:54.898 Cannot find device "nvmf_br" 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:54.898 Cannot find device "nvmf_init_if" 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:54.898 Cannot find device "nvmf_init_if2" 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:54.898 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:11:54.898 
04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:54.898 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:54.898 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:54.898 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:11:54.898 00:11:54.898 --- 10.0.0.3 ping statistics --- 00:11:54.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.898 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:54.898 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:54.898 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:11:54.898 00:11:54.898 --- 10.0.0.4 ping statistics --- 00:11:54.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.898 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:11:54.898 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:55.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:55.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:11:55.158 00:11:55.158 --- 10.0.0.1 ping statistics --- 00:11:55.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.158 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:11:55.158 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:55.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:55.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:11:55.158 00:11:55.158 --- 10.0.0.2 ping statistics --- 00:11:55.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.158 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:55.158 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.158 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:11:55.158 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:55.158 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.158 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:55.158 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:55.158 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.158 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:55.158 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:55.158 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:55.158 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:55.158 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:55.158 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.158 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66596 00:11:55.158 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:55.158 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66596 00:11:55.159 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66596 ']' 00:11:55.159 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.159 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.159 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.159 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.159 04:01:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.159 [2024-12-09 04:01:36.951849] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:11:55.159 [2024-12-09 04:01:36.951955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.159 [2024-12-09 04:01:37.100620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.418 [2024-12-09 04:01:37.185464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.418 [2024-12-09 04:01:37.185528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.418 [2024-12-09 04:01:37.185540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.418 [2024-12-09 04:01:37.185549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.418 [2024-12-09 04:01:37.185557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.418 [2024-12-09 04:01:37.187145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.418 [2024-12-09 04:01:37.187293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.418 [2024-12-09 04:01:37.187357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.418 [2024-12-09 04:01:37.187358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.418 [2024-12-09 04:01:37.265938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:55.418 04:01:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.418 04:01:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:55.418 04:01:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:55.418 04:01:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:55.418 04:01:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.675 04:01:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.675 04:01:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:55.933 [2024-12-09 04:01:37.688952] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.933 04:01:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.192 04:01:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:56.192 04:01:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.758 04:01:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:56.758 04:01:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.064 04:01:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:57.064 04:01:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.321 04:01:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:57.321 04:01:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:57.579 04:01:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.836 04:01:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:57.836 04:01:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.094 04:01:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:58.094 04:01:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.351 04:01:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:58.351 04:01:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:58.609 04:01:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:58.866 04:01:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:58.866 04:01:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:59.430 04:01:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:59.430 04:01:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.688 04:01:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:59.947 [2024-12-09 04:01:41.736851] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:59.947 04:01:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:00.205 04:01:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:00.466 04:01:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid=9ed3da8d-b493-400f-8e42-fb307dd7edcc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:00.725 04:01:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:00.725 04:01:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:12:00.725 04:01:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.725 04:01:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:12:00.725 04:01:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:12:00.725 04:01:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:12:02.625 04:01:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:02.625 04:01:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:02.625 04:01:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.625 04:01:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:12:02.625 04:01:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.625 04:01:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:12:02.625 04:01:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:02.625 [global] 00:12:02.625 thread=1 00:12:02.625 invalidate=1 00:12:02.625 rw=write 00:12:02.625 time_based=1 00:12:02.625 runtime=1 00:12:02.625 ioengine=libaio 00:12:02.625 direct=1 00:12:02.625 bs=4096 00:12:02.625 iodepth=1 00:12:02.625 norandommap=0 00:12:02.625 numjobs=1 00:12:02.625 00:12:02.625 verify_dump=1 00:12:02.625 verify_backlog=512 00:12:02.625 verify_state_save=0 00:12:02.625 do_verify=1 00:12:02.625 verify=crc32c-intel 00:12:02.625 [job0] 00:12:02.625 filename=/dev/nvme0n1 00:12:02.625 [job1] 00:12:02.625 filename=/dev/nvme0n2 00:12:02.625 [job2] 00:12:02.625 filename=/dev/nvme0n3 00:12:02.625 [job3] 00:12:02.625 filename=/dev/nvme0n4 00:12:02.625 Could not set queue depth (nvme0n1) 00:12:02.625 Could not set queue depth (nvme0n2) 00:12:02.625 Could not set queue depth (nvme0n3) 00:12:02.625 Could not set queue depth (nvme0n4) 00:12:02.884 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.884 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.884 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.884 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.884 fio-3.35 00:12:02.884 Starting 4 threads 00:12:04.258 00:12:04.258 job0: (groupid=0, jobs=1): err= 0: pid=66784: Mon Dec 9 04:01:45 2024 00:12:04.258 read: IOPS=1898, BW=7592KiB/s (7775kB/s)(7600KiB/1001msec) 00:12:04.258 slat (nsec): min=11579, max=40736, avg=14468.75, stdev=2681.72 00:12:04.258 clat (usec): min=144, max=559, avg=272.28, stdev=37.95 00:12:04.258 lat (usec): min=157, max=577, avg=286.75, stdev=38.47 00:12:04.258 clat percentiles (usec): 00:12:04.258 | 1.00th=[ 182], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 249], 00:12:04.258 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:12:04.258 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 330], 95.00th=[ 355], 00:12:04.258 | 99.00th=[ 375], 99.50th=[ 408], 99.90th=[ 553], 99.95th=[ 562], 00:12:04.258 | 99.99th=[ 562] 
00:12:04.258 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:04.258 slat (nsec): min=17928, max=85610, avg=24149.19, stdev=7023.32 00:12:04.258 clat (usec): min=102, max=387, avg=194.47, stdev=50.85 00:12:04.258 lat (usec): min=129, max=420, avg=218.62, stdev=54.88 00:12:04.258 clat percentiles (usec): 00:12:04.258 | 1.00th=[ 116], 5.00th=[ 125], 10.00th=[ 133], 20.00th=[ 163], 00:12:04.258 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 194], 00:12:04.258 | 70.00th=[ 200], 80.00th=[ 210], 90.00th=[ 281], 95.00th=[ 310], 00:12:04.258 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 379], 99.95th=[ 388], 00:12:04.258 | 99.99th=[ 388] 00:12:04.258 bw ( KiB/s): min= 8192, max= 8192, per=20.16%, avg=8192.00, stdev= 0.00, samples=1 00:12:04.258 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:04.258 lat (usec) : 250=56.33%, 500=43.59%, 750=0.08% 00:12:04.258 cpu : usr=2.10%, sys=5.70%, ctx=3948, majf=0, minf=5 00:12:04.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.258 issued rwts: total=1900,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.258 job1: (groupid=0, jobs=1): err= 0: pid=66785: Mon Dec 9 04:01:45 2024 00:12:04.258 read: IOPS=1970, BW=7880KiB/s (8069kB/s)(7888KiB/1001msec) 00:12:04.258 slat (usec): min=12, max=404, avg=15.66, stdev= 9.66 00:12:04.258 clat (usec): min=188, max=2465, avg=283.61, stdev=84.69 00:12:04.258 lat (usec): min=201, max=2491, avg=299.27, stdev=86.69 00:12:04.258 clat percentiles (usec): 00:12:04.258 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 249], 00:12:04.258 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:12:04.258 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 347], 95.00th=[ 441], 00:12:04.258 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 1942], 99.95th=[ 2474], 00:12:04.258 | 99.99th=[ 2474] 00:12:04.258 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:04.258 slat (nsec): min=17564, max=74273, avg=20942.79, stdev=3721.16 00:12:04.258 clat (usec): min=96, max=274, avg=175.81, stdev=33.34 00:12:04.258 lat (usec): min=115, max=349, avg=196.75, stdev=34.18 00:12:04.258 clat percentiles (usec): 00:12:04.258 | 1.00th=[ 101], 5.00th=[ 111], 10.00th=[ 118], 20.00th=[ 135], 00:12:04.258 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 192], 00:12:04.258 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 215], 00:12:04.258 | 99.00th=[ 227], 99.50th=[ 239], 99.90th=[ 265], 99.95th=[ 265], 00:12:04.258 | 99.99th=[ 277] 00:12:04.258 bw ( KiB/s): min= 8192, max= 8192, per=20.16%, avg=8192.00, stdev= 0.00, samples=1 00:12:04.258 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:04.258 lat (usec) : 100=0.37%, 250=61.99%, 500=37.04%, 750=0.55% 00:12:04.258 lat (msec) : 2=0.02%, 4=0.02% 00:12:04.258 cpu : usr=1.40%, sys=5.90%, ctx=4020, majf=0, minf=7 00:12:04.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.258 issued rwts: total=1972,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.258 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:12:04.258 job2: (groupid=0, jobs=1): err= 0: pid=66790: Mon Dec 9 04:01:45 2024 00:12:04.258 read: IOPS=2645, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:12:04.258 slat (nsec): min=11871, max=61593, avg=15553.34, stdev=5017.20 00:12:04.258 clat (usec): min=143, max=3291, avg=175.44, stdev=61.84 00:12:04.258 lat (usec): min=157, max=3314, avg=190.99, stdev=62.45 00:12:04.258 clat percentiles (usec): 00:12:04.258 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 163], 00:12:04.258 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:12:04.258 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 196], 00:12:04.258 | 99.00th=[ 210], 99.50th=[ 217], 99.90th=[ 241], 99.95th=[ 281], 00:12:04.258 | 99.99th=[ 3294] 00:12:04.258 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:04.258 slat (usec): min=13, max=102, avg=23.34, stdev= 8.42 00:12:04.258 clat (usec): min=98, max=7137, avg=134.26, stdev=129.35 00:12:04.258 lat (usec): min=121, max=7156, avg=157.59, stdev=129.69 00:12:04.258 clat percentiles (usec): 00:12:04.258 | 1.00th=[ 108], 5.00th=[ 114], 10.00th=[ 117], 20.00th=[ 122], 00:12:04.258 | 30.00th=[ 125], 40.00th=[ 128], 50.00th=[ 131], 60.00th=[ 135], 00:12:04.258 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 151], 00:12:04.258 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 709], 99.95th=[ 1106], 00:12:04.258 | 99.99th=[ 7111] 00:12:04.258 bw ( KiB/s): min=12288, max=12288, per=30.25%, avg=12288.00, stdev= 0.00, samples=1 00:12:04.258 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:04.258 lat (usec) : 100=0.02%, 250=99.86%, 500=0.03%, 750=0.02%, 1000=0.02% 00:12:04.258 lat (msec) : 2=0.02%, 4=0.02%, 10=0.02% 00:12:04.258 cpu : usr=2.70%, sys=8.50%, ctx=5724, majf=0, minf=5 00:12:04.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.258 issued rwts: total=2648,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.258 job3: (groupid=0, jobs=1): err= 0: pid=66791: Mon Dec 9 04:01:45 2024 00:12:04.258 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:04.258 slat (nsec): min=11689, max=90364, avg=16830.72, stdev=4243.72 00:12:04.258 clat (usec): min=149, max=891, avg=180.88, stdev=23.10 00:12:04.258 lat (usec): min=162, max=907, avg=197.72, stdev=24.08 00:12:04.258 clat percentiles (usec): 00:12:04.258 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:12:04.258 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:12:04.258 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 196], 95.00th=[ 202], 00:12:04.258 | 99.00th=[ 215], 99.50th=[ 221], 99.90th=[ 371], 99.95th=[ 791], 00:12:04.258 | 99.99th=[ 889] 00:12:04.258 write: IOPS=2996, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec); 0 zone resets 00:12:04.258 slat (nsec): min=14376, max=81902, avg=26547.76, stdev=7153.12 00:12:04.258 clat (usec): min=101, max=524, avg=134.52, stdev=20.54 00:12:04.258 lat (usec): min=120, max=546, avg=161.07, stdev=22.59 00:12:04.258 clat percentiles (usec): 00:12:04.258 | 1.00th=[ 109], 5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 124], 00:12:04.258 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:12:04.258 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 151], 95.00th=[ 
155], 00:12:04.258 | 99.00th=[ 172], 99.50th=[ 227], 99.90th=[ 429], 99.95th=[ 494], 00:12:04.258 | 99.99th=[ 529] 00:12:04.258 bw ( KiB/s): min=12288, max=12288, per=30.25%, avg=12288.00, stdev= 0.00, samples=1 00:12:04.258 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:04.258 lat (usec) : 250=99.68%, 500=0.27%, 750=0.02%, 1000=0.04% 00:12:04.258 cpu : usr=2.00%, sys=9.90%, ctx=5569, majf=0, minf=19 00:12:04.258 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.258 issued rwts: total=2560,2999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.258 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.258 00:12:04.258 Run status group 0 (all jobs): 00:12:04.258 READ: bw=35.4MiB/s (37.2MB/s), 7592KiB/s-10.3MiB/s (7775kB/s-10.8MB/s), io=35.5MiB (37.2MB), run=1001-1001msec 00:12:04.258 WRITE: bw=39.7MiB/s (41.6MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.7MiB (41.6MB), run=1001-1001msec 00:12:04.258 00:12:04.258 Disk stats (read/write): 00:12:04.258 nvme0n1: ios=1586/1816, merge=0/0, ticks=435/369, in_queue=804, util=87.27% 00:12:04.259 nvme0n2: ios=1566/1983, merge=0/0, ticks=453/359, in_queue=812, util=87.72% 00:12:04.259 nvme0n3: ios=2354/2560, merge=0/0, ticks=417/351, in_queue=768, util=89.20% 00:12:04.259 nvme0n4: ios=2157/2560, merge=0/0, ticks=400/372, in_queue=772, util=89.67% 00:12:04.259 04:01:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:04.259 [global] 00:12:04.259 thread=1 00:12:04.259 invalidate=1 00:12:04.259 rw=randwrite 00:12:04.259 time_based=1 00:12:04.259 runtime=1 00:12:04.259 ioengine=libaio 00:12:04.259 direct=1 00:12:04.259 bs=4096 00:12:04.259 iodepth=1 00:12:04.259 norandommap=0 00:12:04.259 numjobs=1 00:12:04.259 00:12:04.259 verify_dump=1 00:12:04.259 verify_backlog=512 00:12:04.259 verify_state_save=0 00:12:04.259 do_verify=1 00:12:04.259 verify=crc32c-intel 00:12:04.259 [job0] 00:12:04.259 filename=/dev/nvme0n1 00:12:04.259 [job1] 00:12:04.259 filename=/dev/nvme0n2 00:12:04.259 [job2] 00:12:04.259 filename=/dev/nvme0n3 00:12:04.259 [job3] 00:12:04.259 filename=/dev/nvme0n4 00:12:04.259 Could not set queue depth (nvme0n1) 00:12:04.259 Could not set queue depth (nvme0n2) 00:12:04.259 Could not set queue depth (nvme0n3) 00:12:04.259 Could not set queue depth (nvme0n4) 00:12:04.259 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:04.259 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:04.259 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:04.259 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:04.259 fio-3.35 00:12:04.259 Starting 4 threads 00:12:05.697 00:12:05.697 job0: (groupid=0, jobs=1): err= 0: pid=66845: Mon Dec 9 04:01:47 2024 00:12:05.697 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:12:05.697 slat (nsec): min=11110, max=62569, avg=12581.66, stdev=2828.72 00:12:05.697 clat (usec): min=132, max=262, avg=160.33, stdev=12.26 00:12:05.697 lat (usec): min=143, max=275, avg=172.91, stdev=12.51 00:12:05.697 clat percentiles (usec): 
00:12:05.697 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:12:05.697 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:12:05.697 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 184], 00:12:05.697 | 99.00th=[ 196], 99.50th=[ 202], 99.90th=[ 219], 99.95th=[ 237], 00:12:05.697 | 99.99th=[ 265] 00:12:05.697 write: IOPS=3385, BW=13.2MiB/s (13.9MB/s)(13.2MiB/1001msec); 0 zone resets 00:12:05.697 slat (usec): min=13, max=305, avg=19.68, stdev= 7.05 00:12:05.697 clat (usec): min=58, max=378, avg=115.49, stdev=16.84 00:12:05.697 lat (usec): min=103, max=477, avg=135.17, stdev=19.17 00:12:05.697 clat percentiles (usec): 00:12:05.697 | 1.00th=[ 92], 5.00th=[ 96], 10.00th=[ 98], 20.00th=[ 103], 00:12:05.697 | 30.00th=[ 106], 40.00th=[ 110], 50.00th=[ 114], 60.00th=[ 118], 00:12:05.697 | 70.00th=[ 122], 80.00th=[ 128], 90.00th=[ 137], 95.00th=[ 143], 00:12:05.697 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 293], 99.95th=[ 338], 00:12:05.697 | 99.99th=[ 379] 00:12:05.697 bw ( KiB/s): min=13400, max=13400, per=40.87%, avg=13400.00, stdev= 0.00, samples=1 00:12:05.697 iops : min= 3350, max= 3350, avg=3350.00, stdev= 0.00, samples=1 00:12:05.697 lat (usec) : 100=7.60%, 250=92.31%, 500=0.09% 00:12:05.697 cpu : usr=2.30%, sys=8.50%, ctx=6464, majf=0, minf=11 00:12:05.697 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.697 issued rwts: total=3072,3389,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.697 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.697 job1: (groupid=0, jobs=1): err= 0: pid=66846: Mon Dec 9 04:01:47 2024 00:12:05.697 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:05.697 slat (nsec): min=7970, max=54052, avg=12227.37, stdev=4702.23 00:12:05.697 clat (usec): min=165, max=462, avg=334.36, stdev=27.88 00:12:05.697 lat (usec): min=177, max=473, avg=346.58, stdev=28.89 00:12:05.697 clat percentiles (usec): 00:12:05.697 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 310], 00:12:05.697 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 338], 00:12:05.697 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 383], 00:12:05.697 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 433], 99.95th=[ 461], 00:12:05.697 | 99.99th=[ 461] 00:12:05.697 write: IOPS=1605, BW=6422KiB/s (6576kB/s)(6428KiB/1001msec); 0 zone resets 00:12:05.697 slat (nsec): min=11832, max=69298, avg=19061.89, stdev=5954.02 00:12:05.697 clat (usec): min=106, max=711, avg=268.75, stdev=38.44 00:12:05.697 lat (usec): min=147, max=726, avg=287.81, stdev=39.68 00:12:05.697 clat percentiles (usec): 00:12:05.697 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 243], 00:12:05.697 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:12:05.697 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 318], 00:12:05.697 | 99.00th=[ 433], 99.50th=[ 482], 99.90th=[ 668], 99.95th=[ 709], 00:12:05.697 | 99.99th=[ 709] 00:12:05.697 bw ( KiB/s): min= 8208, max= 8208, per=25.03%, avg=8208.00, stdev= 0.00, samples=1 00:12:05.697 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:12:05.697 lat (usec) : 250=15.18%, 500=84.66%, 750=0.16% 00:12:05.698 cpu : usr=1.00%, sys=4.40%, ctx=3146, majf=0, minf=11 00:12:05.698 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.698 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.698 issued rwts: total=1536,1607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.698 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.698 job2: (groupid=0, jobs=1): err= 0: pid=66847: Mon Dec 9 04:01:47 2024 00:12:05.698 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:05.698 slat (nsec): min=6196, max=53067, avg=14842.21, stdev=6047.08 00:12:05.698 clat (usec): min=267, max=413, avg=331.60, stdev=23.54 00:12:05.698 lat (usec): min=276, max=430, avg=346.44, stdev=24.15 00:12:05.698 clat percentiles (usec): 00:12:05.698 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 314], 00:12:05.698 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 338], 00:12:05.698 | 70.00th=[ 343], 80.00th=[ 351], 90.00th=[ 363], 95.00th=[ 375], 00:12:05.698 | 99.00th=[ 396], 99.50th=[ 400], 99.90th=[ 412], 99.95th=[ 412], 00:12:05.698 | 99.99th=[ 412] 00:12:05.698 write: IOPS=1602, BW=6410KiB/s (6563kB/s)(6416KiB/1001msec); 0 zone resets 00:12:05.698 slat (usec): min=11, max=139, avg=19.76, stdev= 7.15 00:12:05.698 clat (usec): min=207, max=841, avg=268.49, stdev=38.46 00:12:05.698 lat (usec): min=221, max=855, avg=288.25, stdev=40.29 00:12:05.698 clat percentiles (usec): 00:12:05.698 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 245], 00:12:05.698 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:12:05.698 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 318], 00:12:05.698 | 99.00th=[ 400], 99.50th=[ 465], 99.90th=[ 775], 99.95th=[ 840], 00:12:05.698 | 99.99th=[ 840] 00:12:05.698 bw ( KiB/s): min= 8192, max= 8192, per=24.99%, avg=8192.00, stdev= 0.00, samples=1 00:12:05.698 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:05.698 lat (usec) : 250=14.94%, 500=84.97%, 750=0.03%, 1000=0.06% 00:12:05.698 cpu : usr=2.30%, sys=3.90%, ctx=3151, majf=0, minf=13 00:12:05.698 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.698 issued rwts: total=1536,1604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.698 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.698 job3: (groupid=0, jobs=1): err= 0: pid=66848: Mon Dec 9 04:01:47 2024 00:12:05.698 read: IOPS=1536, BW=6144KiB/s (6291kB/s)(6144KiB/1000msec) 00:12:05.698 slat (nsec): min=6287, max=59250, avg=12499.34, stdev=5371.57 00:12:05.698 clat (usec): min=255, max=459, avg=334.34, stdev=27.02 00:12:05.698 lat (usec): min=270, max=476, avg=346.83, stdev=28.23 00:12:05.698 clat percentiles (usec): 00:12:05.698 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 310], 00:12:05.698 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 343], 00:12:05.698 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 379], 00:12:05.698 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 433], 99.95th=[ 461], 00:12:05.698 | 99.99th=[ 461] 00:12:05.698 write: IOPS=1605, BW=6420KiB/s (6574kB/s)(6420KiB/1000msec); 0 zone resets 00:12:05.698 slat (usec): min=11, max=136, avg=17.02, stdev= 5.63 00:12:05.698 clat (usec): min=175, max=906, avg=271.23, stdev=42.08 00:12:05.698 lat (usec): min=214, max=921, avg=288.24, stdev=43.00 00:12:05.698 clat percentiles (usec): 00:12:05.698 | 1.00th=[ 217], 5.00th=[ 225], 
10.00th=[ 231], 20.00th=[ 239], 00:12:05.698 | 30.00th=[ 249], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 277], 00:12:05.698 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 326], 00:12:05.698 | 99.00th=[ 388], 99.50th=[ 461], 99.90th=[ 824], 99.95th=[ 906], 00:12:05.698 | 99.99th=[ 906] 00:12:05.698 bw ( KiB/s): min= 8192, max= 8192, per=24.99%, avg=8192.00, stdev= 0.00, samples=1 00:12:05.698 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:05.698 lat (usec) : 250=16.24%, 500=83.60%, 750=0.10%, 1000=0.06% 00:12:05.698 cpu : usr=1.00%, sys=4.10%, ctx=3156, majf=0, minf=9 00:12:05.698 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.698 issued rwts: total=1536,1605,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.698 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.698 00:12:05.698 Run status group 0 (all jobs): 00:12:05.698 READ: bw=30.0MiB/s (31.4MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=30.0MiB (31.5MB), run=1000-1001msec 00:12:05.698 WRITE: bw=32.0MiB/s (33.6MB/s), 6410KiB/s-13.2MiB/s (6563kB/s-13.9MB/s), io=32.1MiB (33.6MB), run=1000-1001msec 00:12:05.698 00:12:05.698 Disk stats (read/write): 00:12:05.698 nvme0n1: ios=2626/3072, merge=0/0, ticks=484/375, in_queue=859, util=90.67% 00:12:05.698 nvme0n2: ios=1276/1536, merge=0/0, ticks=412/382, in_queue=794, util=89.49% 00:12:05.698 nvme0n3: ios=1244/1536, merge=0/0, ticks=408/382, in_queue=790, util=89.74% 00:12:05.698 nvme0n4: ios=1227/1536, merge=0/0, ticks=381/360, in_queue=741, util=89.90% 00:12:05.698 04:01:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:05.698 [global] 00:12:05.698 thread=1 00:12:05.698 invalidate=1 00:12:05.698 rw=write 00:12:05.698 time_based=1 00:12:05.698 runtime=1 00:12:05.698 ioengine=libaio 00:12:05.698 direct=1 00:12:05.698 bs=4096 00:12:05.698 iodepth=128 00:12:05.698 norandommap=0 00:12:05.698 numjobs=1 00:12:05.698 00:12:05.698 verify_dump=1 00:12:05.698 verify_backlog=512 00:12:05.698 verify_state_save=0 00:12:05.698 do_verify=1 00:12:05.698 verify=crc32c-intel 00:12:05.698 [job0] 00:12:05.698 filename=/dev/nvme0n1 00:12:05.698 [job1] 00:12:05.698 filename=/dev/nvme0n2 00:12:05.698 [job2] 00:12:05.698 filename=/dev/nvme0n3 00:12:05.698 [job3] 00:12:05.698 filename=/dev/nvme0n4 00:12:05.698 Could not set queue depth (nvme0n1) 00:12:05.698 Could not set queue depth (nvme0n2) 00:12:05.698 Could not set queue depth (nvme0n3) 00:12:05.698 Could not set queue depth (nvme0n4) 00:12:05.698 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:05.698 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:05.698 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:05.698 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:05.698 fio-3.35 00:12:05.698 Starting 4 threads 00:12:07.073 00:12:07.073 job0: (groupid=0, jobs=1): err= 0: pid=66909: Mon Dec 9 04:01:48 2024 00:12:07.073 read: IOPS=5173, BW=20.2MiB/s (21.2MB/s)(20.2MiB/1002msec) 00:12:07.073 slat (usec): min=7, max=4786, avg=91.12, stdev=384.69 00:12:07.073 
clat (usec): min=717, max=17598, avg=12093.27, stdev=1415.80 00:12:07.073 lat (usec): min=2117, max=17637, avg=12184.38, stdev=1420.98 00:12:07.073 clat percentiles (usec): 00:12:07.073 | 1.00th=[ 6849], 5.00th=[10421], 10.00th=[10814], 20.00th=[11207], 00:12:07.073 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:12:07.073 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[14091], 00:12:07.073 | 99.00th=[15664], 99.50th=[15795], 99.90th=[16450], 99.95th=[16909], 00:12:07.073 | 99.99th=[17695] 00:12:07.073 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:12:07.073 slat (usec): min=14, max=4935, avg=85.75, stdev=467.00 00:12:07.073 clat (usec): min=6378, max=17478, avg=11366.24, stdev=1278.13 00:12:07.073 lat (usec): min=6404, max=17522, avg=11451.99, stdev=1353.18 00:12:07.073 clat percentiles (usec): 00:12:07.073 | 1.00th=[ 7635], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10552], 00:12:07.073 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11469], 00:12:07.073 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12780], 95.00th=[13173], 00:12:07.073 | 99.00th=[15008], 99.50th=[16057], 99.90th=[17171], 99.95th=[17433], 00:12:07.073 | 99.99th=[17433] 00:12:07.073 bw ( KiB/s): min=20480, max=24072, per=31.80%, avg=22276.00, stdev=2539.93, samples=2 00:12:07.073 iops : min= 5120, max= 6018, avg=5569.00, stdev=634.98, samples=2 00:12:07.073 lat (usec) : 750=0.01% 00:12:07.073 lat (msec) : 4=0.18%, 10=5.67%, 20=94.15% 00:12:07.073 cpu : usr=5.79%, sys=16.68%, ctx=327, majf=0, minf=12 00:12:07.073 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:07.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:07.073 issued rwts: total=5184,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:07.073 job1: (groupid=0, jobs=1): err= 0: pid=66910: Mon Dec 9 04:01:48 2024 00:12:07.073 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:12:07.073 slat (usec): min=5, max=9271, avg=148.93, stdev=633.02 00:12:07.073 clat (usec): min=9383, max=34380, avg=18926.64, stdev=6863.57 00:12:07.073 lat (usec): min=9544, max=34405, avg=19075.57, stdev=6897.06 00:12:07.073 clat percentiles (usec): 00:12:07.073 | 1.00th=[10290], 5.00th=[12518], 10.00th=[12780], 20.00th=[13042], 00:12:07.073 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13960], 60.00th=[20841], 00:12:07.073 | 70.00th=[24773], 80.00th=[27132], 90.00th=[28967], 95.00th=[30016], 00:12:07.073 | 99.00th=[32637], 99.50th=[33424], 99.90th=[34341], 99.95th=[34341], 00:12:07.073 | 99.99th=[34341] 00:12:07.073 write: IOPS=3620, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1003msec); 0 zone resets 00:12:07.073 slat (usec): min=8, max=5399, avg=121.42, stdev=468.91 00:12:07.073 clat (usec): min=221, max=32628, avg=16210.60, stdev=4788.49 00:12:07.073 lat (usec): min=3160, max=32646, avg=16332.02, stdev=4800.33 00:12:07.073 clat percentiles (usec): 00:12:07.073 | 1.00th=[ 6587], 5.00th=[12256], 10.00th=[12518], 20.00th=[12649], 00:12:07.073 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13173], 60.00th=[17433], 00:12:07.073 | 70.00th=[19268], 80.00th=[20841], 90.00th=[22414], 95.00th=[24511], 00:12:07.073 | 99.00th=[28967], 99.50th=[31327], 99.90th=[32637], 99.95th=[32637], 00:12:07.073 | 99.99th=[32637] 00:12:07.073 bw ( KiB/s): min= 9472, max=19238, per=20.49%, avg=14355.00, stdev=6905.60, samples=2 
00:12:07.073 iops : min= 2368, max= 4809, avg=3588.50, stdev=1726.05, samples=2 00:12:07.073 lat (usec) : 250=0.01% 00:12:07.073 lat (msec) : 4=0.44%, 10=0.61%, 20=65.79%, 50=33.14% 00:12:07.073 cpu : usr=3.49%, sys=10.28%, ctx=675, majf=0, minf=12 00:12:07.073 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:07.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:07.073 issued rwts: total=3584,3631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:07.073 job2: (groupid=0, jobs=1): err= 0: pid=66911: Mon Dec 9 04:01:48 2024 00:12:07.073 read: IOPS=3179, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1006msec) 00:12:07.073 slat (usec): min=8, max=7778, avg=159.66, stdev=658.31 00:12:07.073 clat (usec): min=1720, max=35058, avg=20142.56, stdev=6284.69 00:12:07.073 lat (usec): min=4384, max=35081, avg=20302.22, stdev=6308.85 00:12:07.073 clat percentiles (usec): 00:12:07.073 | 1.00th=[ 7963], 5.00th=[14091], 10.00th=[14353], 20.00th=[14615], 00:12:07.073 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15533], 60.00th=[23462], 00:12:07.073 | 70.00th=[25035], 80.00th=[26608], 90.00th=[28705], 95.00th=[30278], 00:12:07.073 | 99.00th=[32375], 99.50th=[32375], 99.90th=[34866], 99.95th=[34866], 00:12:07.073 | 99.99th=[34866] 00:12:07.073 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:12:07.073 slat (usec): min=9, max=5273, avg=129.64, stdev=502.09 00:12:07.073 clat (usec): min=10667, max=27750, avg=17392.83, stdev=3906.10 00:12:07.073 lat (usec): min=12211, max=27776, avg=17522.46, stdev=3909.26 00:12:07.073 clat percentiles (usec): 00:12:07.073 | 1.00th=[11731], 5.00th=[13698], 10.00th=[13829], 20.00th=[14222], 00:12:07.073 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15139], 60.00th=[17957], 00:12:07.073 | 70.00th=[19792], 80.00th=[21103], 90.00th=[23200], 95.00th=[25297], 00:12:07.073 | 99.00th=[26870], 99.50th=[27132], 99.90th=[27657], 99.95th=[27657], 00:12:07.073 | 99.99th=[27657] 00:12:07.073 bw ( KiB/s): min=12280, max=16416, per=20.48%, avg=14348.00, stdev=2924.59, samples=2 00:12:07.073 iops : min= 3070, max= 4104, avg=3587.00, stdev=731.15, samples=2 00:12:07.073 lat (msec) : 2=0.01%, 10=0.47%, 20=61.76%, 50=37.76% 00:12:07.073 cpu : usr=2.79%, sys=10.75%, ctx=669, majf=0, minf=13 00:12:07.073 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:07.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:07.073 issued rwts: total=3199,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:07.073 job3: (groupid=0, jobs=1): err= 0: pid=66912: Mon Dec 9 04:01:48 2024 00:12:07.073 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:12:07.073 slat (usec): min=6, max=3957, avg=103.13, stdev=486.77 00:12:07.073 clat (usec): min=9818, max=15806, avg=13703.42, stdev=1071.94 00:12:07.073 lat (usec): min=12139, max=15848, avg=13806.55, stdev=967.40 00:12:07.073 clat percentiles (usec): 00:12:07.073 | 1.00th=[10552], 5.00th=[12256], 10.00th=[12518], 20.00th=[12911], 00:12:07.073 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13435], 60.00th=[13960], 00:12:07.073 | 70.00th=[14353], 80.00th=[14877], 90.00th=[15270], 95.00th=[15401], 00:12:07.073 | 99.00th=[15533], 
99.50th=[15533], 99.90th=[15664], 99.95th=[15795], 00:12:07.073 | 99.99th=[15795] 00:12:07.073 write: IOPS=4759, BW=18.6MiB/s (19.5MB/s)(18.6MiB/1002msec); 0 zone resets 00:12:07.073 slat (usec): min=10, max=3298, avg=102.27, stdev=444.50 00:12:07.073 clat (usec): min=205, max=15854, avg=13273.11, stdev=1614.65 00:12:07.073 lat (usec): min=2335, max=15878, avg=13375.38, stdev=1556.17 00:12:07.073 clat percentiles (usec): 00:12:07.073 | 1.00th=[ 5932], 5.00th=[11863], 10.00th=[12256], 20.00th=[12518], 00:12:07.073 | 30.00th=[12649], 40.00th=[12780], 50.00th=[13042], 60.00th=[13435], 00:12:07.073 | 70.00th=[14091], 80.00th=[14615], 90.00th=[15270], 95.00th=[15401], 00:12:07.073 | 99.00th=[15795], 99.50th=[15795], 99.90th=[15795], 99.95th=[15795], 00:12:07.073 | 99.99th=[15795] 00:12:07.073 bw ( KiB/s): min=17928, max=19200, per=26.50%, avg=18564.00, stdev=899.44, samples=2 00:12:07.073 iops : min= 4482, max= 4800, avg=4641.00, stdev=224.86, samples=2 00:12:07.073 lat (usec) : 250=0.01% 00:12:07.073 lat (msec) : 4=0.34%, 10=0.90%, 20=98.75% 00:12:07.073 cpu : usr=4.70%, sys=13.49%, ctx=295, majf=0, minf=11 00:12:07.073 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:07.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:07.073 issued rwts: total=4608,4769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:07.073 00:12:07.073 Run status group 0 (all jobs): 00:12:07.073 READ: bw=64.4MiB/s (67.5MB/s), 12.4MiB/s-20.2MiB/s (13.0MB/s-21.2MB/s), io=64.7MiB (67.9MB), run=1002-1006msec 00:12:07.073 WRITE: bw=68.4MiB/s (71.7MB/s), 13.9MiB/s-22.0MiB/s (14.6MB/s-23.0MB/s), io=68.8MiB (72.2MB), run=1002-1006msec 00:12:07.073 00:12:07.073 Disk stats (read/write): 00:12:07.073 nvme0n1: ios=4547/4608, merge=0/0, ticks=26285/21700, in_queue=47985, util=87.98% 00:12:07.073 nvme0n2: ios=3111/3356, merge=0/0, ticks=12933/11725, in_queue=24658, util=88.25% 00:12:07.073 nvme0n3: ios=2883/3072, merge=0/0, ticks=13200/11515, in_queue=24715, util=88.92% 00:12:07.073 nvme0n4: ios=3840/4096, merge=0/0, ticks=12056/12186, in_queue=24242, util=89.57% 00:12:07.074 04:01:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:07.074 [global] 00:12:07.074 thread=1 00:12:07.074 invalidate=1 00:12:07.074 rw=randwrite 00:12:07.074 time_based=1 00:12:07.074 runtime=1 00:12:07.074 ioengine=libaio 00:12:07.074 direct=1 00:12:07.074 bs=4096 00:12:07.074 iodepth=128 00:12:07.074 norandommap=0 00:12:07.074 numjobs=1 00:12:07.074 00:12:07.074 verify_dump=1 00:12:07.074 verify_backlog=512 00:12:07.074 verify_state_save=0 00:12:07.074 do_verify=1 00:12:07.074 verify=crc32c-intel 00:12:07.074 [job0] 00:12:07.074 filename=/dev/nvme0n1 00:12:07.074 [job1] 00:12:07.074 filename=/dev/nvme0n2 00:12:07.074 [job2] 00:12:07.074 filename=/dev/nvme0n3 00:12:07.074 [job3] 00:12:07.074 filename=/dev/nvme0n4 00:12:07.074 Could not set queue depth (nvme0n1) 00:12:07.074 Could not set queue depth (nvme0n2) 00:12:07.074 Could not set queue depth (nvme0n3) 00:12:07.074 Could not set queue depth (nvme0n4) 00:12:07.074 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:07.074 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:12:07.074 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:07.074 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:07.074 fio-3.35 00:12:07.074 Starting 4 threads 00:12:08.450 00:12:08.450 job0: (groupid=0, jobs=1): err= 0: pid=66965: Mon Dec 9 04:01:50 2024 00:12:08.450 read: IOPS=1244, BW=4978KiB/s (5098kB/s)(4988KiB/1002msec) 00:12:08.450 slat (usec): min=10, max=22400, avg=478.27, stdev=1710.67 00:12:08.450 clat (usec): min=687, max=82461, avg=57249.68, stdev=16324.42 00:12:08.450 lat (usec): min=1948, max=82493, avg=57727.95, stdev=16341.89 00:12:08.450 clat percentiles (usec): 00:12:08.450 | 1.00th=[ 3884], 5.00th=[20579], 10.00th=[35390], 20.00th=[46400], 00:12:08.450 | 30.00th=[51643], 40.00th=[55837], 50.00th=[59507], 60.00th=[63701], 00:12:08.450 | 70.00th=[67634], 80.00th=[71828], 90.00th=[74974], 95.00th=[77071], 00:12:08.450 | 99.00th=[79168], 99.50th=[79168], 99.90th=[82314], 99.95th=[82314], 00:12:08.450 | 99.99th=[82314] 00:12:08.450 write: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec); 0 zone resets 00:12:08.450 slat (usec): min=13, max=20255, avg=254.35, stdev=1288.33 00:12:08.450 clat (usec): min=14985, max=79989, avg=35212.03, stdev=16660.25 00:12:08.450 lat (usec): min=15023, max=80049, avg=35466.38, stdev=16705.28 00:12:08.450 clat percentiles (usec): 00:12:08.450 | 1.00th=[15139], 5.00th=[15401], 10.00th=[15664], 20.00th=[20579], 00:12:08.450 | 30.00th=[26346], 40.00th=[29492], 50.00th=[30278], 60.00th=[32900], 00:12:08.450 | 70.00th=[38011], 80.00th=[47449], 90.00th=[62653], 95.00th=[73925], 00:12:08.450 | 99.00th=[79168], 99.50th=[80217], 99.90th=[80217], 99.95th=[80217], 00:12:08.450 | 99.99th=[80217] 00:12:08.450 bw ( KiB/s): min= 7520, max= 7520, per=21.71%, avg=7520.00, stdev= 0.00, samples=1 00:12:08.450 iops : min= 1880, max= 1880, avg=1880.00, stdev= 0.00, samples=1 00:12:08.450 lat (usec) : 750=0.04% 00:12:08.450 lat (msec) : 2=0.11%, 4=0.50%, 10=0.36%, 20=9.49%, 50=45.56% 00:12:08.450 lat (msec) : 100=43.95% 00:12:08.450 cpu : usr=1.30%, sys=5.19%, ctx=381, majf=0, minf=9 00:12:08.450 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.7% 00:12:08.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:08.450 issued rwts: total=1247,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.450 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:08.450 job1: (groupid=0, jobs=1): err= 0: pid=66966: Mon Dec 9 04:01:50 2024 00:12:08.450 read: IOPS=1125, BW=4502KiB/s (4611kB/s)(4516KiB/1003msec) 00:12:08.450 slat (usec): min=6, max=23923, avg=473.14, stdev=1666.88 00:12:08.450 clat (usec): min=2041, max=93450, avg=58291.07, stdev=16291.78 00:12:08.450 lat (usec): min=3416, max=93479, avg=58764.21, stdev=16275.83 00:12:08.450 clat percentiles (usec): 00:12:08.450 | 1.00th=[12518], 5.00th=[30016], 10.00th=[35914], 20.00th=[46924], 00:12:08.450 | 30.00th=[50070], 40.00th=[55837], 50.00th=[59507], 60.00th=[63701], 00:12:08.450 | 70.00th=[67634], 80.00th=[71828], 90.00th=[77071], 95.00th=[82314], 00:12:08.450 | 99.00th=[87557], 99.50th=[90702], 99.90th=[93848], 99.95th=[93848], 00:12:08.450 | 99.99th=[93848] 00:12:08.450 write: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec); 0 zone resets 00:12:08.450 slat (usec): min=14, max=25056, avg=295.15, stdev=1413.21 00:12:08.450 
clat (usec): min=17438, max=71254, avg=39013.88, stdev=12906.56 00:12:08.450 lat (usec): min=24941, max=73496, avg=39309.03, stdev=12958.10 00:12:08.450 clat percentiles (usec): 00:12:08.450 | 1.00th=[25035], 5.00th=[25560], 10.00th=[25822], 20.00th=[29754], 00:12:08.450 | 30.00th=[30278], 40.00th=[30540], 50.00th=[31065], 60.00th=[38536], 00:12:08.450 | 70.00th=[45876], 80.00th=[52167], 90.00th=[58983], 95.00th=[65274], 00:12:08.450 | 99.00th=[70779], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:12:08.450 | 99.99th=[70779] 00:12:08.450 bw ( KiB/s): min= 5868, max= 6236, per=17.47%, avg=6052.00, stdev=260.22, samples=2 00:12:08.450 iops : min= 1467, max= 1559, avg=1513.00, stdev=65.05, samples=2 00:12:08.450 lat (msec) : 4=0.26%, 10=0.08%, 20=1.28%, 50=54.48%, 100=43.90% 00:12:08.450 cpu : usr=1.20%, sys=5.59%, ctx=382, majf=0, minf=23 00:12:08.451 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:12:08.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:08.451 issued rwts: total=1129,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.451 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:08.451 job2: (groupid=0, jobs=1): err= 0: pid=66967: Mon Dec 9 04:01:50 2024 00:12:08.451 read: IOPS=4068, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:12:08.451 slat (usec): min=4, max=16869, avg=142.67, stdev=975.11 00:12:08.451 clat (usec): min=2139, max=44512, avg=19546.79, stdev=5390.58 00:12:08.451 lat (usec): min=7443, max=44549, avg=19689.46, stdev=5455.97 00:12:08.451 clat percentiles (usec): 00:12:08.451 | 1.00th=[ 8291], 5.00th=[13173], 10.00th=[15139], 20.00th=[16188], 00:12:08.451 | 30.00th=[16581], 40.00th=[17171], 50.00th=[17433], 60.00th=[17957], 00:12:08.451 | 70.00th=[20317], 80.00th=[23987], 90.00th=[29754], 95.00th=[31065], 00:12:08.451 | 99.00th=[32113], 99.50th=[32113], 99.90th=[39584], 99.95th=[43254], 00:12:08.451 | 99.99th=[44303] 00:12:08.451 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:12:08.451 slat (usec): min=11, max=10132, avg=93.64, stdev=545.35 00:12:08.451 clat (usec): min=4973, max=24107, avg=11549.31, stdev=2753.51 00:12:08.451 lat (usec): min=7052, max=24153, avg=11642.95, stdev=2723.59 00:12:08.451 clat percentiles (usec): 00:12:08.451 | 1.00th=[ 7242], 5.00th=[ 8225], 10.00th=[ 8979], 20.00th=[ 9634], 00:12:08.451 | 30.00th=[10159], 40.00th=[10552], 50.00th=[10945], 60.00th=[11207], 00:12:08.451 | 70.00th=[11863], 80.00th=[12911], 90.00th=[16712], 95.00th=[17433], 00:12:08.451 | 99.00th=[20317], 99.50th=[20579], 99.90th=[20841], 99.95th=[20841], 00:12:08.451 | 99.99th=[23987] 00:12:08.451 bw ( KiB/s): min=16351, max=16384, per=47.25%, avg=16367.50, stdev=23.33, samples=2 00:12:08.451 iops : min= 4087, max= 4096, avg=4091.50, stdev= 6.36, samples=2 00:12:08.451 lat (msec) : 4=0.01%, 10=13.55%, 20=69.74%, 50=16.70% 00:12:08.451 cpu : usr=4.38%, sys=12.05%, ctx=174, majf=0, minf=9 00:12:08.451 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:08.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:08.451 issued rwts: total=4089,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.451 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:08.451 job3: (groupid=0, jobs=1): err= 0: pid=66968: Mon Dec 9 04:01:50 2024 00:12:08.451 
read: IOPS=1253, BW=5016KiB/s (5136kB/s)(5036KiB/1004msec) 00:12:08.451 slat (usec): min=6, max=17337, avg=471.35, stdev=1643.13 00:12:08.451 clat (usec): min=2839, max=78840, avg=57750.57, stdev=14276.69 00:12:08.451 lat (usec): min=7097, max=78870, avg=58221.92, stdev=14274.76 00:12:08.451 clat percentiles (usec): 00:12:08.451 | 1.00th=[12387], 5.00th=[29492], 10.00th=[42730], 20.00th=[48497], 00:12:08.451 | 30.00th=[51643], 40.00th=[54789], 50.00th=[57410], 60.00th=[62129], 00:12:08.451 | 70.00th=[68682], 80.00th=[71828], 90.00th=[74974], 95.00th=[76022], 00:12:08.451 | 99.00th=[77071], 99.50th=[78119], 99.90th=[78119], 99.95th=[79168], 00:12:08.451 | 99.99th=[79168] 00:12:08.451 write: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec); 0 zone resets 00:12:08.451 slat (usec): min=12, max=14159, avg=256.31, stdev=1226.90 00:12:08.451 clat (usec): min=11382, max=75887, avg=35496.42, stdev=14783.36 00:12:08.451 lat (usec): min=15309, max=75909, avg=35752.73, stdev=14821.69 00:12:08.451 clat percentiles (usec): 00:12:08.451 | 1.00th=[15270], 5.00th=[16712], 10.00th=[20317], 20.00th=[23987], 00:12:08.451 | 30.00th=[28705], 40.00th=[30016], 50.00th=[30540], 60.00th=[32113], 00:12:08.451 | 70.00th=[38011], 80.00th=[43779], 90.00th=[63177], 95.00th=[68682], 00:12:08.451 | 99.00th=[73925], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:12:08.451 | 99.99th=[76022] 00:12:08.451 bw ( KiB/s): min= 4926, max= 7352, per=17.72%, avg=6139.00, stdev=1715.44, samples=2 00:12:08.451 iops : min= 1231, max= 1838, avg=1534.50, stdev=429.21, samples=2 00:12:08.451 lat (msec) : 4=0.04%, 10=0.36%, 20=6.26%, 50=49.95%, 100=43.40% 00:12:08.451 cpu : usr=1.79%, sys=4.89%, ctx=389, majf=0, minf=9 00:12:08.451 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.7% 00:12:08.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:08.451 issued rwts: total=1259,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.451 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:08.451 00:12:08.451 Run status group 0 (all jobs): 00:12:08.451 READ: bw=30.0MiB/s (31.5MB/s), 4502KiB/s-15.9MiB/s (4611kB/s-16.7MB/s), io=30.2MiB (31.6MB), run=1002-1005msec 00:12:08.451 WRITE: bw=33.8MiB/s (35.5MB/s), 6120KiB/s-15.9MiB/s (6266kB/s-16.7MB/s), io=34.0MiB (35.7MB), run=1002-1005msec 00:12:08.451 00:12:08.451 Disk stats (read/write): 00:12:08.451 nvme0n1: ios=1074/1406, merge=0/0, ticks=15703/11785, in_queue=27488, util=87.46% 00:12:08.451 nvme0n2: ios=1073/1248, merge=0/0, ticks=15863/10737, in_queue=26600, util=88.66% 00:12:08.451 nvme0n3: ios=3340/3584, merge=0/0, ticks=61841/38493, in_queue=100334, util=89.16% 00:12:08.451 nvme0n4: ios=1024/1421, merge=0/0, ticks=15618/12952, in_queue=28570, util=89.51% 00:12:08.451 04:01:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:08.451 04:01:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66981 00:12:08.451 04:01:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:08.451 04:01:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:08.451 [global] 00:12:08.451 thread=1 00:12:08.451 invalidate=1 00:12:08.451 rw=read 00:12:08.451 time_based=1 00:12:08.451 runtime=10 00:12:08.451 ioengine=libaio 00:12:08.451 direct=1 00:12:08.451 bs=4096 00:12:08.451 
iodepth=1 00:12:08.451 norandommap=1 00:12:08.451 numjobs=1 00:12:08.451 00:12:08.451 [job0] 00:12:08.451 filename=/dev/nvme0n1 00:12:08.451 [job1] 00:12:08.451 filename=/dev/nvme0n2 00:12:08.451 [job2] 00:12:08.451 filename=/dev/nvme0n3 00:12:08.451 [job3] 00:12:08.451 filename=/dev/nvme0n4 00:12:08.451 Could not set queue depth (nvme0n1) 00:12:08.451 Could not set queue depth (nvme0n2) 00:12:08.451 Could not set queue depth (nvme0n3) 00:12:08.451 Could not set queue depth (nvme0n4) 00:12:08.451 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.451 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.451 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.451 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.451 fio-3.35 00:12:08.451 Starting 4 threads 00:12:11.738 04:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:11.738 fio: pid=67029, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:11.738 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=45084672, buflen=4096 00:12:11.738 04:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:11.738 fio: pid=67028, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:11.738 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=65814528, buflen=4096 00:12:11.996 04:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:11.996 04:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:11.996 fio: pid=67026, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:11.996 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11317248, buflen=4096 00:12:12.262 04:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:12.262 04:01:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:12.522 fio: pid=67027, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:12.522 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=64241664, buflen=4096 00:12:12.522 00:12:12.522 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67026: Mon Dec 9 04:01:54 2024 00:12:12.522 read: IOPS=5430, BW=21.2MiB/s (22.2MB/s)(74.8MiB/3526msec) 00:12:12.522 slat (usec): min=10, max=13695, avg=16.04, stdev=146.36 00:12:12.522 clat (usec): min=128, max=3945, avg=166.90, stdev=47.07 00:12:12.522 lat (usec): min=141, max=13879, avg=182.94, stdev=155.84 00:12:12.522 clat percentiles (usec): 00:12:12.522 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:12:12.522 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:12:12.522 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 198], 00:12:12.522 | 99.00th=[ 
258], 99.50th=[ 273], 99.90th=[ 734], 99.95th=[ 963], 00:12:12.522 | 99.99th=[ 2008] 00:12:12.522 bw ( KiB/s): min=21888, max=23024, per=35.31%, avg=22549.67, stdev=450.08, samples=6 00:12:12.522 iops : min= 5472, max= 5756, avg=5637.33, stdev=112.51, samples=6 00:12:12.522 lat (usec) : 250=98.52%, 500=1.28%, 750=0.11%, 1000=0.04% 00:12:12.522 lat (msec) : 2=0.04%, 4=0.01% 00:12:12.522 cpu : usr=1.25%, sys=6.78%, ctx=19154, majf=0, minf=1 00:12:12.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.522 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.522 issued rwts: total=19148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.522 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67027: Mon Dec 9 04:01:54 2024 00:12:12.522 read: IOPS=4044, BW=15.8MiB/s (16.6MB/s)(61.3MiB/3878msec) 00:12:12.522 slat (usec): min=8, max=9839, avg=17.08, stdev=155.46 00:12:12.522 clat (usec): min=3, max=3049, avg=228.67, stdev=64.93 00:12:12.522 lat (usec): min=138, max=10106, avg=245.75, stdev=168.02 00:12:12.522 clat percentiles (usec): 00:12:12.522 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 161], 00:12:12.522 | 30.00th=[ 227], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:12:12.522 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:12:12.522 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 445], 99.95th=[ 1037], 00:12:12.522 | 99.99th=[ 3032] 00:12:12.522 bw ( KiB/s): min=14440, max=20785, per=24.29%, avg=15510.14, stdev=2329.38, samples=7 00:12:12.522 iops : min= 3610, max= 5196, avg=3877.43, stdev=582.28, samples=7 00:12:12.522 lat (usec) : 4=0.03%, 10=0.01%, 50=0.01%, 100=0.02%, 250=52.50% 00:12:12.522 lat (usec) : 500=47.34%, 750=0.03% 00:12:12.522 lat (msec) : 2=0.03%, 4=0.02% 00:12:12.522 cpu : usr=1.13%, sys=5.55%, ctx=15732, majf=0, minf=1 00:12:12.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.522 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.522 issued rwts: total=15685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.522 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67028: Mon Dec 9 04:01:54 2024 00:12:12.522 read: IOPS=4905, BW=19.2MiB/s (20.1MB/s)(62.8MiB/3276msec) 00:12:12.522 slat (usec): min=10, max=15391, avg=15.88, stdev=151.38 00:12:12.522 clat (usec): min=143, max=3973, avg=186.68, stdev=58.19 00:12:12.522 lat (usec): min=155, max=15914, avg=202.56, stdev=164.56 00:12:12.522 clat percentiles (usec): 00:12:12.522 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:12:12.522 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 186], 00:12:12.522 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 233], 00:12:12.522 | 99.00th=[ 269], 99.50th=[ 297], 99.90th=[ 750], 99.95th=[ 930], 00:12:12.522 | 99.99th=[ 3392] 00:12:12.522 bw ( KiB/s): min=18896, max=21608, per=31.64%, avg=20205.33, stdev=1175.14, samples=6 00:12:12.522 iops : min= 4724, max= 5402, avg=5051.33, stdev=293.79, samples=6 00:12:12.522 lat (usec) : 250=97.49%, 500=2.30%, 750=0.09%, 1000=0.06% 00:12:12.522 lat 
(msec) : 2=0.02%, 4=0.02% 00:12:12.522 cpu : usr=1.74%, sys=5.89%, ctx=16074, majf=0, minf=2 00:12:12.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.522 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.522 issued rwts: total=16069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.522 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67029: Mon Dec 9 04:01:54 2024 00:12:12.522 read: IOPS=3648, BW=14.3MiB/s (14.9MB/s)(43.0MiB/3017msec) 00:12:12.522 slat (usec): min=8, max=112, avg=11.60, stdev= 4.16 00:12:12.522 clat (usec): min=188, max=2980, avg=261.28, stdev=41.83 00:12:12.522 lat (usec): min=205, max=2997, avg=272.89, stdev=42.46 00:12:12.522 clat percentiles (usec): 00:12:12.522 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:12:12.522 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:12:12.522 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 289], 00:12:12.522 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 424], 99.95th=[ 1123], 00:12:12.522 | 99.99th=[ 2212] 00:12:12.522 bw ( KiB/s): min=14432, max=14840, per=22.91%, avg=14629.67, stdev=142.26, samples=6 00:12:12.522 iops : min= 3608, max= 3710, avg=3657.33, stdev=35.52, samples=6 00:12:12.522 lat (usec) : 250=26.28%, 500=73.62%, 750=0.04% 00:12:12.522 lat (msec) : 2=0.04%, 4=0.02% 00:12:12.522 cpu : usr=1.06%, sys=3.78%, ctx=11013, majf=0, minf=2 00:12:12.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.522 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.522 issued rwts: total=11008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.522 00:12:12.522 Run status group 0 (all jobs): 00:12:12.522 READ: bw=62.4MiB/s (65.4MB/s), 14.3MiB/s-21.2MiB/s (14.9MB/s-22.2MB/s), io=242MiB (254MB), run=3017-3878msec 00:12:12.522 00:12:12.522 Disk stats (read/write): 00:12:12.522 nvme0n1: ios=18219/0, merge=0/0, ticks=3123/0, in_queue=3123, util=95.33% 00:12:12.523 nvme0n2: ios=15635/0, merge=0/0, ticks=3586/0, in_queue=3586, util=95.96% 00:12:12.523 nvme0n3: ios=15469/0, merge=0/0, ticks=2894/0, in_queue=2894, util=96.02% 00:12:12.523 nvme0n4: ios=10469/0, merge=0/0, ticks=2583/0, in_queue=2583, util=96.79% 00:12:12.523 04:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:12.523 04:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:12.781 04:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:12.781 04:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:13.348 04:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:13.348 04:01:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:13.348 04:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:13.348 04:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:13.913 04:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:13.913 04:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:14.171 04:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:14.171 04:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66981 00:12:14.171 04:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:14.171 04:01:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.171 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:14.171 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:14.171 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:14.171 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.171 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:14.171 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.171 nvmf hotplug test: fio failed as expected 00:12:14.171 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:14.171 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:14.171 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:14.171 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.428 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:14.428 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:14.428 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:14.428 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:14.428 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:14.428 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:14.428 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:14.428 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:14.428 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@124 -- # set +e 00:12:14.428 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:14.428 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:14.428 rmmod nvme_tcp 00:12:14.428 rmmod nvme_fabrics 00:12:14.428 rmmod nvme_keyring 00:12:14.428 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:14.685 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:14.685 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:14.685 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66596 ']' 00:12:14.685 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66596 00:12:14.685 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66596 ']' 00:12:14.685 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66596 00:12:14.685 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:14.685 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.685 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66596 00:12:14.685 killing process with pid 66596 00:12:14.685 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.685 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.685 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66596' 00:12:14.685 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66596 00:12:14.685 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66596 00:12:14.943 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:14.943 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:14.943 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:14.943 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:14.943 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:14.943 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:14.943 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:14.943 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:14.943 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:14.943 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:14.943 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:14.943 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:14.943 04:01:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:14.943 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:14.943 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:14.943 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:14.943 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:14.943 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:14.943 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:15.201 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:15.201 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:15.201 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:15.201 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:15.201 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.201 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.201 04:01:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.201 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:12:15.201 00:12:15.201 real 0m20.768s 00:12:15.201 user 1m17.500s 00:12:15.201 sys 0m10.862s 00:12:15.201 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.201 ************************************ 00:12:15.201 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.201 END TEST nvmf_fio_target 00:12:15.201 ************************************ 00:12:15.201 04:01:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:15.201 04:01:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:15.201 04:01:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.201 04:01:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:15.201 ************************************ 00:12:15.201 START TEST nvmf_bdevio 00:12:15.201 ************************************ 00:12:15.201 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:15.201 * Looking for test storage... 
00:12:15.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:15.459 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:15.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.460 --rc genhtml_branch_coverage=1 00:12:15.460 --rc genhtml_function_coverage=1 00:12:15.460 --rc genhtml_legend=1 00:12:15.460 --rc geninfo_all_blocks=1 00:12:15.460 --rc geninfo_unexecuted_blocks=1 00:12:15.460 00:12:15.460 ' 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:15.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.460 --rc genhtml_branch_coverage=1 00:12:15.460 --rc genhtml_function_coverage=1 00:12:15.460 --rc genhtml_legend=1 00:12:15.460 --rc geninfo_all_blocks=1 00:12:15.460 --rc geninfo_unexecuted_blocks=1 00:12:15.460 00:12:15.460 ' 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:15.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.460 --rc genhtml_branch_coverage=1 00:12:15.460 --rc genhtml_function_coverage=1 00:12:15.460 --rc genhtml_legend=1 00:12:15.460 --rc geninfo_all_blocks=1 00:12:15.460 --rc geninfo_unexecuted_blocks=1 00:12:15.460 00:12:15.460 ' 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:15.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.460 --rc genhtml_branch_coverage=1 00:12:15.460 --rc genhtml_function_coverage=1 00:12:15.460 --rc genhtml_legend=1 00:12:15.460 --rc geninfo_all_blocks=1 00:12:15.460 --rc geninfo_unexecuted_blocks=1 00:12:15.460 00:12:15.460 ' 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.460 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.460 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:15.461 Cannot find device "nvmf_init_br" 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:15.461 Cannot find device "nvmf_init_br2" 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:15.461 Cannot find device "nvmf_tgt_br" 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:15.461 Cannot find device "nvmf_tgt_br2" 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:15.461 Cannot find device "nvmf_init_br" 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:15.461 Cannot find device "nvmf_init_br2" 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:15.461 Cannot find device "nvmf_tgt_br" 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:15.461 Cannot find device "nvmf_tgt_br2" 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:12:15.461 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:15.725 Cannot find device "nvmf_br" 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:15.725 Cannot find device "nvmf_init_if" 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:15.725 Cannot find device "nvmf_init_if2" 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:15.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:15.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:15.725 
04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:15.725 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:15.993 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:15.993 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.122 ms 00:12:15.993 00:12:15.993 --- 10.0.0.3 ping statistics --- 00:12:15.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.993 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:15.993 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:15.993 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:12:15.993 00:12:15.993 --- 10.0.0.4 ping statistics --- 00:12:15.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.993 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:15.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:12:15.993 00:12:15.993 --- 10.0.0.1 ping statistics --- 00:12:15.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.993 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:15.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:15.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:12:15.993 00:12:15.993 --- 10.0.0.2 ping statistics --- 00:12:15.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.993 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=67359 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 67359 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 67359 ']' 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.993 04:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:15.993 [2024-12-09 04:01:57.802570] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:12:15.993 [2024-12-09 04:01:57.802702] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.251 [2024-12-09 04:01:57.963763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.251 [2024-12-09 04:01:58.064340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.251 [2024-12-09 04:01:58.064406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.251 [2024-12-09 04:01:58.064420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.251 [2024-12-09 04:01:58.064431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.251 [2024-12-09 04:01:58.064441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.251 [2024-12-09 04:01:58.066562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:16.251 [2024-12-09 04:01:58.066705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:16.251 [2024-12-09 04:01:58.066862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:16.251 [2024-12-09 04:01:58.067490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.251 [2024-12-09 04:01:58.148576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:17.183 [2024-12-09 04:01:58.926262] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:17.183 Malloc0 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.183 04:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:17.183 [2024-12-09 04:01:59.007435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:17.183 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.183 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:17.183 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:17.183 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:17.183 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:17.183 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:17.183 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:17.183 { 00:12:17.183 "params": { 00:12:17.183 "name": "Nvme$subsystem", 00:12:17.183 "trtype": "$TEST_TRANSPORT", 00:12:17.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:17.183 "adrfam": "ipv4", 00:12:17.183 "trsvcid": "$NVMF_PORT", 00:12:17.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:17.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:17.183 "hdgst": ${hdgst:-false}, 00:12:17.183 "ddgst": ${ddgst:-false} 00:12:17.183 }, 00:12:17.183 "method": "bdev_nvme_attach_controller" 00:12:17.183 } 00:12:17.183 EOF 00:12:17.183 )") 00:12:17.183 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:17.183 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:12:17.183 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:17.183 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:17.183 "params": { 00:12:17.183 "name": "Nvme1", 00:12:17.183 "trtype": "tcp", 00:12:17.183 "traddr": "10.0.0.3", 00:12:17.183 "adrfam": "ipv4", 00:12:17.183 "trsvcid": "4420", 00:12:17.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:17.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:17.183 "hdgst": false, 00:12:17.183 "ddgst": false 00:12:17.184 }, 00:12:17.184 "method": "bdev_nvme_attach_controller" 00:12:17.184 }' 00:12:17.184 [2024-12-09 04:01:59.062714] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:12:17.184 [2024-12-09 04:01:59.062813] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67399 ] 00:12:17.442 [2024-12-09 04:01:59.210950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:17.442 [2024-12-09 04:01:59.296907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.442 [2024-12-09 04:01:59.297063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.442 [2024-12-09 04:01:59.297059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.442 [2024-12-09 04:01:59.381630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:17.700 I/O targets: 00:12:17.700 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:17.700 00:12:17.700 00:12:17.700 CUnit - A unit testing framework for C - Version 2.1-3 00:12:17.700 http://cunit.sourceforge.net/ 00:12:17.700 00:12:17.700 00:12:17.700 Suite: bdevio tests on: Nvme1n1 00:12:17.700 Test: blockdev write read block ...passed 00:12:17.700 Test: blockdev write zeroes read block ...passed 00:12:17.700 Test: blockdev write zeroes read no split ...passed 00:12:17.700 Test: blockdev write zeroes read split ...passed 00:12:17.700 Test: blockdev write zeroes read split partial ...passed 00:12:17.700 Test: blockdev reset ...[2024-12-09 04:01:59.556565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:17.700 [2024-12-09 04:01:59.556693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cdb80 (9): Bad file descriptor 00:12:17.700 [2024-12-09 04:01:59.569648] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:17.700 passed 00:12:17.700 Test: blockdev write read 8 blocks ...passed 00:12:17.700 Test: blockdev write read size > 128k ...passed 00:12:17.700 Test: blockdev write read invalid size ...passed 00:12:17.700 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:17.700 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:17.700 Test: blockdev write read max offset ...passed 00:12:17.700 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:17.700 Test: blockdev writev readv 8 blocks ...passed 00:12:17.700 Test: blockdev writev readv 30 x 1block ...passed 00:12:17.700 Test: blockdev writev readv block ...passed 00:12:17.700 Test: blockdev writev readv size > 128k ...passed 00:12:17.700 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:17.700 Test: blockdev comparev and writev ...[2024-12-09 04:01:59.577717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.700 [2024-12-09 04:01:59.577761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:17.700 [2024-12-09 04:01:59.577784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.700 [2024-12-09 04:01:59.577795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:17.700 [2024-12-09 04:01:59.578407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.700 [2024-12-09 04:01:59.578439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:17.700 [2024-12-09 04:01:59.578458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.700 [2024-12-09 04:01:59.578469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:17.700 [2024-12-09 04:01:59.578894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.700 [2024-12-09 04:01:59.578923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:17.700 [2024-12-09 04:01:59.578942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.700 [2024-12-09 04:01:59.578953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:17.700 [2024-12-09 04:01:59.579388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.700 [2024-12-09 04:01:59.579417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:17.700 [2024-12-09 04:01:59.579436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.700 [2024-12-09 04:01:59.579446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:17.700 passed 00:12:17.700 Test: blockdev nvme passthru rw ...passed 00:12:17.700 Test: blockdev nvme passthru vendor specific ...[2024-12-09 04:01:59.580504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.700 [2024-12-09 04:01:59.580532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:17.700 [2024-12-09 04:01:59.580732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.700 [2024-12-09 04:01:59.580757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:17.700 [2024-12-09 04:01:59.580917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.700 [2024-12-09 04:01:59.580945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:17.700 [2024-12-09 04:01:59.581119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.700 [2024-12-09 04:01:59.581143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:17.700 passed 00:12:17.700 Test: blockdev nvme admin passthru ...passed 00:12:17.700 Test: blockdev copy ...passed 00:12:17.700 00:12:17.700 Run Summary: Type Total Ran Passed Failed Inactive 00:12:17.700 suites 1 1 n/a 0 0 00:12:17.700 tests 23 23 23 0 0 00:12:17.700 asserts 152 152 152 0 n/a 00:12:17.700 00:12:17.700 Elapsed time = 0.153 seconds 00:12:17.958 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.958 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.958 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:17.958 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.958 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:17.958 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:17.958 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:17.958 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:17.958 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:17.958 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:17.958 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:17.958 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:17.958 rmmod nvme_tcp 00:12:18.216 rmmod nvme_fabrics 00:12:18.216 rmmod nvme_keyring 00:12:18.216 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:18.216 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:18.216 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:12:18.216 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 67359 ']' 00:12:18.216 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 67359 00:12:18.216 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 67359 ']' 00:12:18.216 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 67359 00:12:18.216 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:18.216 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.216 04:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67359 00:12:18.216 killing process with pid 67359 00:12:18.216 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:18.216 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:18.216 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67359' 00:12:18.216 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 67359 00:12:18.216 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 67359 00:12:18.473 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:18.473 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:18.473 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:18.473 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:18.473 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:18.473 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:18.473 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:18.473 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:18.473 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:18.473 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:18.473 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:18.473 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:18.473 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:18.473 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:18.473 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:18.473 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:18.473 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:18.473 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:18.747 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:12:18.747 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:18.747 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:18.747 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:18.747 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:18.747 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.747 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.747 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.747 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:12:18.747 00:12:18.747 real 0m3.488s 00:12:18.747 user 0m10.527s 00:12:18.747 sys 0m1.088s 00:12:18.747 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.747 04:02:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:18.747 ************************************ 00:12:18.747 END TEST nvmf_bdevio 00:12:18.747 ************************************ 00:12:18.747 04:02:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:18.747 ************************************ 00:12:18.747 END TEST nvmf_target_core 00:12:18.747 ************************************ 00:12:18.747 00:12:18.747 real 2m41.125s 00:12:18.747 user 7m4.058s 00:12:18.747 sys 0m55.941s 00:12:18.747 04:02:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.747 04:02:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:18.747 04:02:00 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:18.747 04:02:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:18.747 04:02:00 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.747 04:02:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:18.747 ************************************ 00:12:18.747 START TEST nvmf_target_extra 00:12:18.747 ************************************ 00:12:18.747 04:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:19.007 * Looking for test storage... 
00:12:19.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:19.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.007 --rc genhtml_branch_coverage=1 00:12:19.007 --rc genhtml_function_coverage=1 00:12:19.007 --rc genhtml_legend=1 00:12:19.007 --rc geninfo_all_blocks=1 00:12:19.007 --rc geninfo_unexecuted_blocks=1 00:12:19.007 00:12:19.007 ' 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:19.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.007 --rc genhtml_branch_coverage=1 00:12:19.007 --rc genhtml_function_coverage=1 00:12:19.007 --rc genhtml_legend=1 00:12:19.007 --rc geninfo_all_blocks=1 00:12:19.007 --rc geninfo_unexecuted_blocks=1 00:12:19.007 00:12:19.007 ' 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:19.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.007 --rc genhtml_branch_coverage=1 00:12:19.007 --rc genhtml_function_coverage=1 00:12:19.007 --rc genhtml_legend=1 00:12:19.007 --rc geninfo_all_blocks=1 00:12:19.007 --rc geninfo_unexecuted_blocks=1 00:12:19.007 00:12:19.007 ' 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:19.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.007 --rc genhtml_branch_coverage=1 00:12:19.007 --rc genhtml_function_coverage=1 00:12:19.007 --rc genhtml_legend=1 00:12:19.007 --rc geninfo_all_blocks=1 00:12:19.007 --rc geninfo_unexecuted_blocks=1 00:12:19.007 00:12:19.007 ' 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.007 04:02:00 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:19.007 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:19.007 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:19.008 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:19.008 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:19.008 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:19.008 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:19.008 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:12:19.008 04:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:19.008 04:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:19.008 04:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.008 04:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:19.008 ************************************ 00:12:19.008 START TEST nvmf_auth_target 00:12:19.008 ************************************ 00:12:19.008 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:19.008 * Looking for test storage... 
00:12:19.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:19.008 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:19.008 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:12:19.008 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:19.267 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:19.267 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.267 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.267 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.267 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.267 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.267 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.267 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.267 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.267 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.267 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.267 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.267 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:12:19.267 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:12:19.267 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:19.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.268 --rc genhtml_branch_coverage=1 00:12:19.268 --rc genhtml_function_coverage=1 00:12:19.268 --rc genhtml_legend=1 00:12:19.268 --rc geninfo_all_blocks=1 00:12:19.268 --rc geninfo_unexecuted_blocks=1 00:12:19.268 00:12:19.268 ' 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:19.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.268 --rc genhtml_branch_coverage=1 00:12:19.268 --rc genhtml_function_coverage=1 00:12:19.268 --rc genhtml_legend=1 00:12:19.268 --rc geninfo_all_blocks=1 00:12:19.268 --rc geninfo_unexecuted_blocks=1 00:12:19.268 00:12:19.268 ' 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:19.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.268 --rc genhtml_branch_coverage=1 00:12:19.268 --rc genhtml_function_coverage=1 00:12:19.268 --rc genhtml_legend=1 00:12:19.268 --rc geninfo_all_blocks=1 00:12:19.268 --rc geninfo_unexecuted_blocks=1 00:12:19.268 00:12:19.268 ' 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:19.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.268 --rc genhtml_branch_coverage=1 00:12:19.268 --rc genhtml_function_coverage=1 00:12:19.268 --rc genhtml_legend=1 00:12:19.268 --rc geninfo_all_blocks=1 00:12:19.268 --rc geninfo_unexecuted_blocks=1 00:12:19.268 00:12:19.268 ' 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:19.268 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:19.268 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:19.269 
04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:19.269 Cannot find device "nvmf_init_br" 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:19.269 Cannot find device "nvmf_init_br2" 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:19.269 Cannot find device "nvmf_tgt_br" 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:19.269 Cannot find device "nvmf_tgt_br2" 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:19.269 Cannot find device "nvmf_init_br" 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:19.269 Cannot find device "nvmf_init_br2" 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:19.269 Cannot find device "nvmf_tgt_br" 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:19.269 Cannot find device "nvmf_tgt_br2" 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:19.269 Cannot find device "nvmf_br" 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:19.269 Cannot find device "nvmf_init_if" 00:12:19.269 04:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:19.269 Cannot find device "nvmf_init_if2" 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:19.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:19.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:12:19.269 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:19.528 04:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:19.528 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:19.528 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:12:19.528 00:12:19.528 --- 10.0.0.3 ping statistics --- 00:12:19.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.528 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:19.528 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:19.528 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:12:19.528 00:12:19.528 --- 10.0.0.4 ping statistics --- 00:12:19.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.528 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:19.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:19.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:12:19.528 00:12:19.528 --- 10.0.0.1 ping statistics --- 00:12:19.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.528 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:19.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:12:19.528 00:12:19.528 --- 10.0.0.2 ping statistics --- 00:12:19.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.528 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:19.528 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:19.786 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:12:19.786 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:19.786 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:19.786 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.786 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67685 00:12:19.786 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:19.786 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67685 00:12:19.786 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67685 ']' 00:12:19.786 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.786 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.786 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
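For readers following the nvmf_veth_init sequence above, the same topology can be reproduced as one standalone root shell script. The sketch below simply collects the ip/iptables commands already logged (interface names, addresses, and the 4420 listener port are taken from the log); the grouping, loops, and comments are illustrative and are not part of the test scripts themselves.

# Sketch: rebuild the veth/bridge topology used by this test (run as root;
# requires iproute2 and iptables). Commands mirror the nvmf_veth_init log above.
ip netns add nvmf_tgt_ns_spdk

# One veth pair per link; the *_br ends stay in the default namespace.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces move into the namespace where nvmf_tgt runs.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4 (all /24).
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up, including loopback inside the namespace.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# A single bridge ties both sides together on one L2 segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Accept NVMe/TCP (port 4420) from the initiator interfaces and allow
# bridge-local forwarding (the test additionally tags these rules with -m comment).
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity-check reachability in both directions before starting the target.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2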
00:12:19.786 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.786 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.044 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.044 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:20.044 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:20.044 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:20.044 04:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67710 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=893c1abddbb99a24fe4b2e5e79160e02f7c5b854f2be70b0 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Ekf 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 893c1abddbb99a24fe4b2e5e79160e02f7c5b854f2be70b0 0 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 893c1abddbb99a24fe4b2e5e79160e02f7c5b854f2be70b0 0 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=893c1abddbb99a24fe4b2e5e79160e02f7c5b854f2be70b0 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:20.302 04:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Ekf 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Ekf 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Ekf 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=26360eeeacbb9238f20fd428c46f306ae18ede089ae73c3b65aea3fd2a729ce0 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.0bi 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 26360eeeacbb9238f20fd428c46f306ae18ede089ae73c3b65aea3fd2a729ce0 3 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 26360eeeacbb9238f20fd428c46f306ae18ede089ae73c3b65aea3fd2a729ce0 3 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=26360eeeacbb9238f20fd428c46f306ae18ede089ae73c3b65aea3fd2a729ce0 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.0bi 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.0bi 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.0bi 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:20.302 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:20.303 04:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8d9cbb44ae93088837ee504f5ed4073d 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.hjh 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8d9cbb44ae93088837ee504f5ed4073d 1 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8d9cbb44ae93088837ee504f5ed4073d 1 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8d9cbb44ae93088837ee504f5ed4073d 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.hjh 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.hjh 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.hjh 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:20.303 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fb3c12514c89a0f02ff0f13781e72ba81830986f1b56b6b5 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.aBV 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fb3c12514c89a0f02ff0f13781e72ba81830986f1b56b6b5 2 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fb3c12514c89a0f02ff0f13781e72ba81830986f1b56b6b5 2 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fb3c12514c89a0f02ff0f13781e72ba81830986f1b56b6b5 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.aBV 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.aBV 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.aBV 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:20.561 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d190eedfa8ef44828d2c425506552547ae2fa3c56e0f3eff 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Pis 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d190eedfa8ef44828d2c425506552547ae2fa3c56e0f3eff 2 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d190eedfa8ef44828d2c425506552547ae2fa3c56e0f3eff 2 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d190eedfa8ef44828d2c425506552547ae2fa3c56e0f3eff 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Pis 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Pis 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Pis 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:20.562 04:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=45dd959dc92fbc7f395b97b6fb167299 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.zgQ 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 45dd959dc92fbc7f395b97b6fb167299 1 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 45dd959dc92fbc7f395b97b6fb167299 1 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=45dd959dc92fbc7f395b97b6fb167299 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.zgQ 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.zgQ 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.zgQ 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2eed301abd9420fb950e9e3bc72faac816f3480ce03c58689ce33f3d82472b83 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.wzh 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
2eed301abd9420fb950e9e3bc72faac816f3480ce03c58689ce33f3d82472b83 3 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2eed301abd9420fb950e9e3bc72faac816f3480ce03c58689ce33f3d82472b83 3 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2eed301abd9420fb950e9e3bc72faac816f3480ce03c58689ce33f3d82472b83 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:20.562 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:20.826 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.wzh 00:12:20.826 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.wzh 00:12:20.826 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.wzh 00:12:20.826 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:12:20.826 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67685 00:12:20.826 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67685 ']' 00:12:20.826 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.826 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.826 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.826 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.826 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.124 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.125 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:21.125 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67710 /var/tmp/host.sock 00:12:21.125 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67710 ']' 00:12:21.125 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:21.125 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:21.125 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
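Each gen_dhchap_key call above draws len/2 random bytes with xxd, keeps them as a hex string, and hands that string to an inline python helper that emits the DHHC-1 representation. A minimal standalone sketch is below; the exact encoding (base64 of the ASCII secret followed by its CRC-32 packed little-endian, a two-digit digest id, and a trailing colon) is inferred from the keys printed in this log rather than copied from nvmf/common.sh, so treat that part as an assumption and check the helper in your SPDK tree.

#!/usr/bin/env bash
# Sketch: produce one DH-HMAC-CHAP secret in DHHC-1 form.
#   $1 = digest id as used in the log (0=null, 1=sha256, 2=sha384, 3=sha512)
#   $2 = secret length in hex characters (48 or 64 in this run)
gen_dhchap_key_sketch() {
    local digest_id=$1 len=$2 secret
    # Hex string of the requested length, exactly as the logged xxd call does.
    secret=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    python3 - "$secret" "$digest_id" <<'PYEOF'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()
digest = int(sys.argv[2])
# Assumed layout inferred from the logged keys: base64(secret || CRC-32(secret))
# with the CRC packed little-endian, wrapped as DHHC-1:<NN>:<base64>:
blob = secret + struct.pack("<I", zlib.crc32(secret))
print(f"DHHC-1:{digest:02d}:{base64.b64encode(blob).decode()}:")
PYEOF
}

# Usage mirroring keys[0] above: store the secret 0600 in a temp key file.
key_file=$(mktemp -t spdk.key-null.XXX)
gen_dhchap_key_sketch 0 48 > "$key_file"
chmod 0600 "$key_file"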
00:12:21.125 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.125 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.384 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.384 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:21.384 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:12:21.384 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.384 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.384 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.384 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:21.384 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ekf 00:12:21.384 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.384 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.384 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.384 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Ekf 00:12:21.384 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Ekf 00:12:21.643 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.0bi ]] 00:12:21.643 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0bi 00:12:21.643 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.643 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.643 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.643 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0bi 00:12:21.643 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0bi 00:12:21.903 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:21.903 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.hjh 00:12:21.903 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.903 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.903 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.903 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.hjh 00:12:21.903 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.hjh 00:12:22.161 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.aBV ]] 00:12:22.161 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aBV 00:12:22.161 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.161 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.161 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.161 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aBV 00:12:22.161 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aBV 00:12:22.419 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:22.419 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Pis 00:12:22.419 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.419 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.419 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.419 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Pis 00:12:22.419 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Pis 00:12:22.693 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.zgQ ]] 00:12:22.693 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zgQ 00:12:22.693 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.693 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.693 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.693 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zgQ 00:12:22.693 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zgQ 00:12:22.951 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:22.951 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.wzh 00:12:22.951 04:02:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.952 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.952 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.952 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.wzh 00:12:22.952 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.wzh 00:12:23.209 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:12:23.209 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:23.209 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:23.209 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.209 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:23.209 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:23.466 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:12:23.466 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.466 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:23.466 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:23.466 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:23.466 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.466 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.466 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.466 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.466 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.466 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.466 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.466 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.724 00:12:23.981 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.981 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.981 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.239 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.239 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.239 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.239 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.239 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.239 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.239 { 00:12:24.239 "cntlid": 1, 00:12:24.239 "qid": 0, 00:12:24.239 "state": "enabled", 00:12:24.239 "thread": "nvmf_tgt_poll_group_000", 00:12:24.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:12:24.239 "listen_address": { 00:12:24.239 "trtype": "TCP", 00:12:24.239 "adrfam": "IPv4", 00:12:24.239 "traddr": "10.0.0.3", 00:12:24.239 "trsvcid": "4420" 00:12:24.239 }, 00:12:24.239 "peer_address": { 00:12:24.239 "trtype": "TCP", 00:12:24.239 "adrfam": "IPv4", 00:12:24.239 "traddr": "10.0.0.1", 00:12:24.239 "trsvcid": "41650" 00:12:24.239 }, 00:12:24.239 "auth": { 00:12:24.239 "state": "completed", 00:12:24.239 "digest": "sha256", 00:12:24.239 "dhgroup": "null" 00:12:24.239 } 00:12:24.239 } 00:12:24.239 ]' 00:12:24.239 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.239 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:24.239 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.239 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:24.239 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.239 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.239 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.239 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.803 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:12:24.803 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:12:28.981 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.982 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:28.982 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.982 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.982 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.982 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.982 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:28.982 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:29.548 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:12:29.548 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:29.548 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:29.548 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:29.548 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:29.548 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.548 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.548 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.548 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.548 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.548 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.548 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.548 04:02:11 
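Every connect_authenticate pass in this section follows the same cycle; the condensed sketch below shows one iteration built from the RPC calls visible in the log (the key1 / sha256 / null combination running here). The subsystem and host NQNs, addresses, port, and key names are taken from the log; calling rpc.py directly for the target side (the log's rpc_cmd wrapper is not expanded under xtrace) and the shell variable names are assumptions.

# Sketch of one DH-HMAC-CHAP verification pass (key1 / sha256 / null shown).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc

# Host side: restrict the initiator to the digest/dhgroup under test.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# Target side: allow the host on the subsystem with this key pair.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attaching a controller forces the authentication exchange.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Target side: the new qpair should report a completed auth state with the
# negotiated digest and dhgroup (the log checks these fields with jq).
qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
jq -r '.[0].auth.state'   <<<"$qpairs"   # expect "completed"
jq -r '.[0].auth.digest'  <<<"$qpairs"   # expect "sha256"
jq -r '.[0].auth.dhgroup' <<<"$qpairs"   # expect "null"

# Tear down before the next key: detach the SPDK host controller, exercise the
# kernel initiator with nvme-cli and the DHHC-1 secrets themselves, then drop
# the host from the subsystem (secrets elided here; see the log lines).
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
# nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -q "$HOSTNQN" -l 0 \
#     --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...
# nvme disconnect -n "$SUBNQN"
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"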
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.807 00:12:29.807 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.807 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.807 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.066 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.066 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.066 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.066 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.066 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.066 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.066 { 00:12:30.066 "cntlid": 3, 00:12:30.066 "qid": 0, 00:12:30.066 "state": "enabled", 00:12:30.066 "thread": "nvmf_tgt_poll_group_000", 00:12:30.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:12:30.066 "listen_address": { 00:12:30.066 "trtype": "TCP", 00:12:30.066 "adrfam": "IPv4", 00:12:30.066 "traddr": "10.0.0.3", 00:12:30.066 "trsvcid": "4420" 00:12:30.066 }, 00:12:30.066 "peer_address": { 00:12:30.066 "trtype": "TCP", 00:12:30.066 "adrfam": "IPv4", 00:12:30.066 "traddr": "10.0.0.1", 00:12:30.066 "trsvcid": "57770" 00:12:30.066 }, 00:12:30.066 "auth": { 00:12:30.066 "state": "completed", 00:12:30.066 "digest": "sha256", 00:12:30.066 "dhgroup": "null" 00:12:30.066 } 00:12:30.066 } 00:12:30.066 ]' 00:12:30.066 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.066 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:30.066 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.066 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:30.066 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.066 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.066 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.066 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.633 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret 
DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:12:30.633 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:12:31.200 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.200 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:31.200 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.200 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.200 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.200 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:31.200 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:31.200 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:31.459 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:12:31.459 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:31.459 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:31.459 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:31.459 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:31.459 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.459 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.459 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.459 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.717 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.717 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.717 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.717 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.975 00:12:31.975 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.975 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.975 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.233 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.233 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.233 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.233 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.233 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.233 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:32.233 { 00:12:32.233 "cntlid": 5, 00:12:32.233 "qid": 0, 00:12:32.233 "state": "enabled", 00:12:32.233 "thread": "nvmf_tgt_poll_group_000", 00:12:32.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:12:32.233 "listen_address": { 00:12:32.233 "trtype": "TCP", 00:12:32.233 "adrfam": "IPv4", 00:12:32.233 "traddr": "10.0.0.3", 00:12:32.233 "trsvcid": "4420" 00:12:32.233 }, 00:12:32.233 "peer_address": { 00:12:32.233 "trtype": "TCP", 00:12:32.233 "adrfam": "IPv4", 00:12:32.233 "traddr": "10.0.0.1", 00:12:32.233 "trsvcid": "57784" 00:12:32.233 }, 00:12:32.233 "auth": { 00:12:32.233 "state": "completed", 00:12:32.233 "digest": "sha256", 00:12:32.233 "dhgroup": "null" 00:12:32.233 } 00:12:32.233 } 00:12:32.233 ]' 00:12:32.233 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:32.233 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:32.233 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:32.491 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:32.491 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:32.491 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.491 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.491 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.749 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:12:32.749 04:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:12:33.320 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.321 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:33.321 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.321 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.321 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.321 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:33.321 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:33.321 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:33.947 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:12:33.947 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.947 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:33.947 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:33.947 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:33.947 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.947 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:12:33.947 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.947 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.947 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.947 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:33.947 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:33.947 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:33.947 00:12:34.205 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:34.205 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.205 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:34.462 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.462 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.462 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.462 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.462 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.462 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:34.462 { 00:12:34.462 "cntlid": 7, 00:12:34.462 "qid": 0, 00:12:34.462 "state": "enabled", 00:12:34.462 "thread": "nvmf_tgt_poll_group_000", 00:12:34.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:12:34.462 "listen_address": { 00:12:34.462 "trtype": "TCP", 00:12:34.462 "adrfam": "IPv4", 00:12:34.462 "traddr": "10.0.0.3", 00:12:34.462 "trsvcid": "4420" 00:12:34.462 }, 00:12:34.462 "peer_address": { 00:12:34.462 "trtype": "TCP", 00:12:34.462 "adrfam": "IPv4", 00:12:34.462 "traddr": "10.0.0.1", 00:12:34.462 "trsvcid": "57818" 00:12:34.462 }, 00:12:34.462 "auth": { 00:12:34.462 "state": "completed", 00:12:34.462 "digest": "sha256", 00:12:34.462 "dhgroup": "null" 00:12:34.462 } 00:12:34.462 } 00:12:34.462 ]' 00:12:34.462 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.462 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:34.462 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.462 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:34.462 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.462 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.462 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.462 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.027 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:12:35.027 04:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:12:35.590 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.590 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:35.590 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.590 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.590 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.590 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:35.590 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.590 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:35.590 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:35.847 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:12:35.847 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.847 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:35.847 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:35.847 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:35.847 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.847 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.847 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.847 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.847 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.847 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.847 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.847 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.103 00:12:36.361 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:36.361 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:36.361 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.618 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.618 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.618 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.618 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.618 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.618 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.618 { 00:12:36.618 "cntlid": 9, 00:12:36.618 "qid": 0, 00:12:36.618 "state": "enabled", 00:12:36.618 "thread": "nvmf_tgt_poll_group_000", 00:12:36.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:12:36.618 "listen_address": { 00:12:36.618 "trtype": "TCP", 00:12:36.618 "adrfam": "IPv4", 00:12:36.618 "traddr": "10.0.0.3", 00:12:36.618 "trsvcid": "4420" 00:12:36.618 }, 00:12:36.618 "peer_address": { 00:12:36.618 "trtype": "TCP", 00:12:36.618 "adrfam": "IPv4", 00:12:36.618 "traddr": "10.0.0.1", 00:12:36.618 "trsvcid": "57842" 00:12:36.618 }, 00:12:36.618 "auth": { 00:12:36.618 "state": "completed", 00:12:36.618 "digest": "sha256", 00:12:36.618 "dhgroup": "ffdhe2048" 00:12:36.618 } 00:12:36.618 } 00:12:36.618 ]' 00:12:36.618 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.618 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:36.618 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.618 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:36.618 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.618 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.618 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.618 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.874 
04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:12:36.874 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:12:37.806 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.806 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:37.806 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.806 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.806 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.806 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.806 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:37.806 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:37.806 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:12:37.806 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.806 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:37.806 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:37.806 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:37.806 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.806 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.806 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.806 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.064 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.064 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.064 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.064 04:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.323 00:12:38.323 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.323 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.323 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.581 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.581 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.581 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.581 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.581 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.581 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.581 { 00:12:38.581 "cntlid": 11, 00:12:38.581 "qid": 0, 00:12:38.581 "state": "enabled", 00:12:38.581 "thread": "nvmf_tgt_poll_group_000", 00:12:38.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:12:38.581 "listen_address": { 00:12:38.581 "trtype": "TCP", 00:12:38.581 "adrfam": "IPv4", 00:12:38.581 "traddr": "10.0.0.3", 00:12:38.581 "trsvcid": "4420" 00:12:38.581 }, 00:12:38.581 "peer_address": { 00:12:38.581 "trtype": "TCP", 00:12:38.581 "adrfam": "IPv4", 00:12:38.581 "traddr": "10.0.0.1", 00:12:38.581 "trsvcid": "47722" 00:12:38.581 }, 00:12:38.581 "auth": { 00:12:38.581 "state": "completed", 00:12:38.581 "digest": "sha256", 00:12:38.581 "dhgroup": "ffdhe2048" 00:12:38.581 } 00:12:38.581 } 00:12:38.581 ]' 00:12:38.581 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.581 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:38.581 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.581 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:38.581 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.581 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.581 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.581 
04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.840 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:12:38.840 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:12:39.774 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.774 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:39.774 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.774 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.774 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.774 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.774 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:39.774 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:40.032 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:12:40.032 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.032 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:40.032 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:40.032 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:40.032 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.032 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.032 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.032 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.032 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:40.032 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.032 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.032 04:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.290 00:12:40.290 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.290 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.290 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.548 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.548 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.548 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.548 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.548 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.548 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.548 { 00:12:40.548 "cntlid": 13, 00:12:40.548 "qid": 0, 00:12:40.548 "state": "enabled", 00:12:40.548 "thread": "nvmf_tgt_poll_group_000", 00:12:40.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:12:40.548 "listen_address": { 00:12:40.548 "trtype": "TCP", 00:12:40.548 "adrfam": "IPv4", 00:12:40.548 "traddr": "10.0.0.3", 00:12:40.548 "trsvcid": "4420" 00:12:40.548 }, 00:12:40.548 "peer_address": { 00:12:40.548 "trtype": "TCP", 00:12:40.548 "adrfam": "IPv4", 00:12:40.548 "traddr": "10.0.0.1", 00:12:40.548 "trsvcid": "47754" 00:12:40.548 }, 00:12:40.548 "auth": { 00:12:40.548 "state": "completed", 00:12:40.548 "digest": "sha256", 00:12:40.548 "dhgroup": "ffdhe2048" 00:12:40.548 } 00:12:40.548 } 00:12:40.548 ]' 00:12:40.548 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.827 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:40.827 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.827 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:40.827 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.827 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.827 04:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.827 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.090 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:12:41.091 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:12:41.656 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.656 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:41.656 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.656 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.656 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.656 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.656 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:41.656 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:41.915 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:12:41.915 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.915 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:41.915 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:41.915 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:41.915 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.915 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:12:41.915 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.915 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
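For reference, the trace above (and below) repeats one and the same DH-HMAC-CHAP cycle for every digest/dhgroup/key combination. The condensed sketch that follows strings the commands already visible in the trace together in order; it is an illustrative summary only, not the literal target/auth.sh test script. It assumes the host-side SPDK app answers on /var/tmp/host.sock (as the hostrpc calls above show), that rpc_cmd talks to the target over its default RPC socket, that the addresses, NQNs and host ID are the ones printed in the trace, and that the named keys key0..key3 / ckey0..ckey3 were registered earlier in the test run. SECRET/CSECRET stand in for the literal DHHC-1:xx:... strings that the nvme connect lines above pass directly.

#!/usr/bin/env bash
# Illustrative summary of one nvmf_auth_target iteration (assumptions noted above).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock                 # host-side SPDK app (bdev_nvme_* calls)
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOST_NQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc
HOST_ID=9ed3da8d-b493-400f-8e42-fb307dd7edcc

digest=sha256 dhgroup=ffdhe2048 keyid=2      # one of the combinations exercised in the trace

# 1. Restrict the host-side initiator to the digest/dhgroup under test.
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup

# 2. Allow the host on the subsystem with this iteration's key pair (target-side RPC;
#    when no controller key is configured, e.g. key3 above, the --dhchap-ctrlr-key part is dropped).
$RPC nvmf_subsystem_add_host $SUBNQN $HOST_NQN --dhchap-key key$keyid --dhchap-ctrlr-key ckey$keyid

# 3. Attach a controller from the host app, authenticating with the same named keys.
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q $HOST_NQN -n $SUBNQN -b nvme0 --dhchap-key key$keyid --dhchap-ctrlr-key ckey$keyid

# 4. Verify the controller came up and the qpair finished authentication.
$RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'   # expect: completed

# 5. Tear down the bdev path, then repeat the handshake with the kernel initiator,
#    passing the literal DHHC-1 secrets as in the nvme connect lines of the trace.
$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.3 -n $SUBNQN -i 1 -q $HOST_NQN --hostid $HOST_ID -l 0 \
    --dhchap-secret "$SECRET" --dhchap-ctrl-secret "$CSECRET"
nvme disconnect -n $SUBNQN

# 6. Remove the host again before the next digest/dhgroup/key combination.
$RPC nvmf_subsystem_remove_host $SUBNQN $HOST_NQN

In the trace this cycle is driven by the nested loops over dhgroups (null, ffdhe2048, ffdhe3072, ...) and key IDs 0..3, which is why the same add_host/attach/get_qpairs/detach/connect/disconnect sequence recurs with only the --dhchap-dhgroups value and keyN/ckeyN names changing.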
00:12:41.915 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.915 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:41.915 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:41.915 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:42.481 00:12:42.481 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.481 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.481 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.481 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.738 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.738 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.738 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.738 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.738 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.738 { 00:12:42.738 "cntlid": 15, 00:12:42.738 "qid": 0, 00:12:42.738 "state": "enabled", 00:12:42.738 "thread": "nvmf_tgt_poll_group_000", 00:12:42.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:12:42.738 "listen_address": { 00:12:42.738 "trtype": "TCP", 00:12:42.738 "adrfam": "IPv4", 00:12:42.738 "traddr": "10.0.0.3", 00:12:42.738 "trsvcid": "4420" 00:12:42.738 }, 00:12:42.738 "peer_address": { 00:12:42.738 "trtype": "TCP", 00:12:42.738 "adrfam": "IPv4", 00:12:42.738 "traddr": "10.0.0.1", 00:12:42.738 "trsvcid": "47782" 00:12:42.738 }, 00:12:42.738 "auth": { 00:12:42.738 "state": "completed", 00:12:42.738 "digest": "sha256", 00:12:42.738 "dhgroup": "ffdhe2048" 00:12:42.738 } 00:12:42.738 } 00:12:42.738 ]' 00:12:42.738 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.738 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:42.738 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.738 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:42.738 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.738 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.738 
04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.738 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.996 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:12:42.996 04:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:12:43.578 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.835 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:43.835 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.835 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.835 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.835 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:43.835 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.835 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:43.835 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:44.093 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:12:44.093 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.093 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:44.093 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:44.093 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:44.093 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.093 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.093 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.093 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:44.093 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.093 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.093 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.093 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.350 00:12:44.350 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.350 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.350 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.607 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.607 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.607 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.607 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.607 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.607 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.607 { 00:12:44.607 "cntlid": 17, 00:12:44.607 "qid": 0, 00:12:44.607 "state": "enabled", 00:12:44.607 "thread": "nvmf_tgt_poll_group_000", 00:12:44.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:12:44.608 "listen_address": { 00:12:44.608 "trtype": "TCP", 00:12:44.608 "adrfam": "IPv4", 00:12:44.608 "traddr": "10.0.0.3", 00:12:44.608 "trsvcid": "4420" 00:12:44.608 }, 00:12:44.608 "peer_address": { 00:12:44.608 "trtype": "TCP", 00:12:44.608 "adrfam": "IPv4", 00:12:44.608 "traddr": "10.0.0.1", 00:12:44.608 "trsvcid": "47804" 00:12:44.608 }, 00:12:44.608 "auth": { 00:12:44.608 "state": "completed", 00:12:44.608 "digest": "sha256", 00:12:44.608 "dhgroup": "ffdhe3072" 00:12:44.608 } 00:12:44.608 } 00:12:44.608 ]' 00:12:44.608 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.865 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:44.865 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.865 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:44.865 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.865 04:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.865 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.865 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.124 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:12:45.124 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:12:46.054 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.054 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:46.054 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.054 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.054 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.054 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.054 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:46.054 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:46.055 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:12:46.055 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.055 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:46.055 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:46.055 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:46.055 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.055 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:12:46.055 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.055 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.055 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.055 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.055 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.055 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.637 00:12:46.637 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.637 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.637 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.895 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.895 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.895 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.896 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.896 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.896 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.896 { 00:12:46.896 "cntlid": 19, 00:12:46.896 "qid": 0, 00:12:46.896 "state": "enabled", 00:12:46.896 "thread": "nvmf_tgt_poll_group_000", 00:12:46.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:12:46.896 "listen_address": { 00:12:46.896 "trtype": "TCP", 00:12:46.896 "adrfam": "IPv4", 00:12:46.896 "traddr": "10.0.0.3", 00:12:46.896 "trsvcid": "4420" 00:12:46.896 }, 00:12:46.896 "peer_address": { 00:12:46.896 "trtype": "TCP", 00:12:46.896 "adrfam": "IPv4", 00:12:46.896 "traddr": "10.0.0.1", 00:12:46.896 "trsvcid": "47824" 00:12:46.896 }, 00:12:46.896 "auth": { 00:12:46.896 "state": "completed", 00:12:46.896 "digest": "sha256", 00:12:46.896 "dhgroup": "ffdhe3072" 00:12:46.896 } 00:12:46.896 } 00:12:46.896 ]' 00:12:46.896 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.896 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:46.896 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.896 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:46.896 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.896 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.896 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.896 04:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.154 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:12:47.154 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:12:48.125 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.125 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:48.125 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.125 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.125 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.125 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.125 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:48.125 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:48.383 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:12:48.383 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.383 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:48.383 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:48.383 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:48.383 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.383 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.383 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.383 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.383 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.383 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.383 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.383 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.642 00:12:48.642 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.642 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.642 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.901 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.901 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.901 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.901 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.901 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.901 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.901 { 00:12:48.901 "cntlid": 21, 00:12:48.901 "qid": 0, 00:12:48.901 "state": "enabled", 00:12:48.901 "thread": "nvmf_tgt_poll_group_000", 00:12:48.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:12:48.901 "listen_address": { 00:12:48.901 "trtype": "TCP", 00:12:48.901 "adrfam": "IPv4", 00:12:48.901 "traddr": "10.0.0.3", 00:12:48.901 "trsvcid": "4420" 00:12:48.901 }, 00:12:48.901 "peer_address": { 00:12:48.901 "trtype": "TCP", 00:12:48.901 "adrfam": "IPv4", 00:12:48.901 "traddr": "10.0.0.1", 00:12:48.901 "trsvcid": "49630" 00:12:48.901 }, 00:12:48.901 "auth": { 00:12:48.901 "state": "completed", 00:12:48.901 "digest": "sha256", 00:12:48.901 "dhgroup": "ffdhe3072" 00:12:48.901 } 00:12:48.901 } 00:12:48.901 ]' 00:12:48.901 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.901 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:48.901 04:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.159 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:49.159 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.159 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.159 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.159 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.416 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:12:49.416 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:12:49.981 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.981 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:49.981 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.981 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.981 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.981 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:49.981 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:49.981 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:50.239 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:12:50.239 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.239 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:50.239 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:50.239 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:50.239 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.239 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:12:50.239 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.239 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.239 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.239 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:50.239 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:50.239 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:50.805 00:12:50.805 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.805 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.806 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.064 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.064 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.064 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.064 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.064 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.064 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.064 { 00:12:51.064 "cntlid": 23, 00:12:51.064 "qid": 0, 00:12:51.064 "state": "enabled", 00:12:51.064 "thread": "nvmf_tgt_poll_group_000", 00:12:51.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:12:51.064 "listen_address": { 00:12:51.064 "trtype": "TCP", 00:12:51.064 "adrfam": "IPv4", 00:12:51.064 "traddr": "10.0.0.3", 00:12:51.064 "trsvcid": "4420" 00:12:51.064 }, 00:12:51.064 "peer_address": { 00:12:51.064 "trtype": "TCP", 00:12:51.064 "adrfam": "IPv4", 00:12:51.064 "traddr": "10.0.0.1", 00:12:51.064 "trsvcid": "49644" 00:12:51.064 }, 00:12:51.064 "auth": { 00:12:51.064 "state": "completed", 00:12:51.064 "digest": "sha256", 00:12:51.064 "dhgroup": "ffdhe3072" 00:12:51.064 } 00:12:51.064 } 00:12:51.064 ]' 00:12:51.064 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.064 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:12:51.064 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.064 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:51.064 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.064 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.064 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.064 04:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.322 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:12:51.322 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:12:51.888 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.145 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:52.145 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.145 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.145 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.145 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:52.145 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.145 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:52.145 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:52.403 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:12:52.403 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.403 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:52.403 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:52.403 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:52.403 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.403 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.403 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.403 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.403 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.403 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.403 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.403 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.661 00:12:52.661 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:52.661 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.661 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.227 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.227 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.227 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.227 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.227 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.227 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.227 { 00:12:53.227 "cntlid": 25, 00:12:53.227 "qid": 0, 00:12:53.227 "state": "enabled", 00:12:53.227 "thread": "nvmf_tgt_poll_group_000", 00:12:53.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:12:53.227 "listen_address": { 00:12:53.227 "trtype": "TCP", 00:12:53.227 "adrfam": "IPv4", 00:12:53.227 "traddr": "10.0.0.3", 00:12:53.227 "trsvcid": "4420" 00:12:53.227 }, 00:12:53.227 "peer_address": { 00:12:53.227 "trtype": "TCP", 00:12:53.227 "adrfam": "IPv4", 00:12:53.227 "traddr": "10.0.0.1", 00:12:53.227 "trsvcid": "49676" 00:12:53.227 }, 00:12:53.227 "auth": { 00:12:53.227 "state": "completed", 00:12:53.227 "digest": "sha256", 00:12:53.227 "dhgroup": "ffdhe4096" 00:12:53.227 } 00:12:53.227 } 00:12:53.227 ]' 00:12:53.227 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:12:53.227 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:53.227 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.227 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:53.227 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.227 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.227 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.227 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.485 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:12:53.485 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:12:54.049 04:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.311 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:54.311 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.311 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.311 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.311 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.311 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:54.311 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:54.569 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:12:54.569 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.569 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:54.569 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:54.569 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:54.569 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.569 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.569 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.569 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.569 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.569 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.569 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.569 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.828 00:12:54.828 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.828 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.828 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.087 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.087 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.087 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.087 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.087 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.087 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.087 { 00:12:55.087 "cntlid": 27, 00:12:55.087 "qid": 0, 00:12:55.087 "state": "enabled", 00:12:55.087 "thread": "nvmf_tgt_poll_group_000", 00:12:55.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:12:55.087 "listen_address": { 00:12:55.087 "trtype": "TCP", 00:12:55.087 "adrfam": "IPv4", 00:12:55.087 "traddr": "10.0.0.3", 00:12:55.087 "trsvcid": "4420" 00:12:55.087 }, 00:12:55.087 "peer_address": { 00:12:55.087 "trtype": "TCP", 00:12:55.087 "adrfam": "IPv4", 00:12:55.087 "traddr": "10.0.0.1", 00:12:55.087 "trsvcid": "49702" 00:12:55.087 }, 00:12:55.087 "auth": { 00:12:55.087 "state": "completed", 
00:12:55.087 "digest": "sha256", 00:12:55.087 "dhgroup": "ffdhe4096" 00:12:55.087 } 00:12:55.087 } 00:12:55.087 ]' 00:12:55.087 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.087 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:55.087 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.346 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:55.346 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.346 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.346 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.346 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.604 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:12:55.604 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:12:56.172 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.172 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:56.172 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.172 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.172 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.172 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.172 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:56.172 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:56.431 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:12:56.431 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.431 04:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:56.431 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:56.431 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:56.431 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.431 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.431 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.431 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.431 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.431 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.432 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.432 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.999 00:12:56.999 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.999 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.999 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.999 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.999 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.999 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.999 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.999 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.258 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.258 { 00:12:57.258 "cntlid": 29, 00:12:57.258 "qid": 0, 00:12:57.258 "state": "enabled", 00:12:57.258 "thread": "nvmf_tgt_poll_group_000", 00:12:57.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:12:57.258 "listen_address": { 00:12:57.258 "trtype": "TCP", 00:12:57.258 "adrfam": "IPv4", 00:12:57.258 "traddr": "10.0.0.3", 00:12:57.258 "trsvcid": "4420" 00:12:57.258 }, 00:12:57.258 "peer_address": { 00:12:57.258 "trtype": "TCP", 00:12:57.258 "adrfam": 
"IPv4", 00:12:57.258 "traddr": "10.0.0.1", 00:12:57.258 "trsvcid": "54732" 00:12:57.258 }, 00:12:57.258 "auth": { 00:12:57.258 "state": "completed", 00:12:57.258 "digest": "sha256", 00:12:57.258 "dhgroup": "ffdhe4096" 00:12:57.258 } 00:12:57.258 } 00:12:57.258 ]' 00:12:57.258 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.258 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:57.258 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.258 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:57.258 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.258 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.258 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.258 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.517 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:12:57.517 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:12:58.452 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.452 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:12:58.452 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.452 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.452 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.452 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.452 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:58.453 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:58.453 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:12:58.453 04:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.453 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:58.453 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:58.453 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:58.453 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.453 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:12:58.453 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.453 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.453 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.453 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:58.453 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:58.453 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:59.018 00:12:59.018 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:59.018 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.018 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:59.306 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.306 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.306 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.306 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.306 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.306 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.307 { 00:12:59.307 "cntlid": 31, 00:12:59.307 "qid": 0, 00:12:59.307 "state": "enabled", 00:12:59.307 "thread": "nvmf_tgt_poll_group_000", 00:12:59.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:12:59.307 "listen_address": { 00:12:59.307 "trtype": "TCP", 00:12:59.307 "adrfam": "IPv4", 00:12:59.307 "traddr": "10.0.0.3", 00:12:59.307 "trsvcid": "4420" 00:12:59.307 }, 00:12:59.307 "peer_address": { 00:12:59.307 "trtype": "TCP", 
00:12:59.307 "adrfam": "IPv4", 00:12:59.307 "traddr": "10.0.0.1", 00:12:59.307 "trsvcid": "54752" 00:12:59.307 }, 00:12:59.307 "auth": { 00:12:59.307 "state": "completed", 00:12:59.307 "digest": "sha256", 00:12:59.307 "dhgroup": "ffdhe4096" 00:12:59.307 } 00:12:59.307 } 00:12:59.307 ]' 00:12:59.307 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.307 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:59.307 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.307 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:59.307 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.307 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.307 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.307 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.906 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:12:59.906 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:13:00.496 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.496 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:00.496 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.496 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.496 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.496 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:00.496 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.496 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:00.496 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:00.753 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:13:00.753 
04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.753 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:00.753 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:00.753 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:00.753 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.753 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.753 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.753 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.753 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.753 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.753 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.753 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.318 00:13:01.318 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.318 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.318 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.577 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.577 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.577 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.577 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.577 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.577 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:01.577 { 00:13:01.577 "cntlid": 33, 00:13:01.577 "qid": 0, 00:13:01.577 "state": "enabled", 00:13:01.577 "thread": "nvmf_tgt_poll_group_000", 00:13:01.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:01.577 "listen_address": { 00:13:01.577 "trtype": "TCP", 00:13:01.577 "adrfam": "IPv4", 00:13:01.577 "traddr": 
"10.0.0.3", 00:13:01.577 "trsvcid": "4420" 00:13:01.577 }, 00:13:01.577 "peer_address": { 00:13:01.577 "trtype": "TCP", 00:13:01.577 "adrfam": "IPv4", 00:13:01.577 "traddr": "10.0.0.1", 00:13:01.577 "trsvcid": "54776" 00:13:01.577 }, 00:13:01.577 "auth": { 00:13:01.577 "state": "completed", 00:13:01.577 "digest": "sha256", 00:13:01.577 "dhgroup": "ffdhe6144" 00:13:01.577 } 00:13:01.577 } 00:13:01.577 ]' 00:13:01.577 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:01.577 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:01.577 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:01.577 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:01.577 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.836 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.836 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.836 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.836 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:13:01.836 04:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:13:02.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:02.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:02.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:02.772 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:03.031 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:13:03.031 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.031 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:03.031 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:03.031 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:03.031 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.031 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.031 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.031 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.031 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.031 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.031 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.031 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.295 00:13:03.295 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:03.295 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:03.295 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.555 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.555 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.555 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.555 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.555 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.555 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:03.555 { 00:13:03.555 "cntlid": 35, 00:13:03.555 "qid": 0, 00:13:03.555 "state": "enabled", 00:13:03.555 "thread": "nvmf_tgt_poll_group_000", 
00:13:03.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:03.555 "listen_address": { 00:13:03.555 "trtype": "TCP", 00:13:03.555 "adrfam": "IPv4", 00:13:03.555 "traddr": "10.0.0.3", 00:13:03.555 "trsvcid": "4420" 00:13:03.555 }, 00:13:03.555 "peer_address": { 00:13:03.555 "trtype": "TCP", 00:13:03.555 "adrfam": "IPv4", 00:13:03.555 "traddr": "10.0.0.1", 00:13:03.555 "trsvcid": "54816" 00:13:03.555 }, 00:13:03.555 "auth": { 00:13:03.555 "state": "completed", 00:13:03.555 "digest": "sha256", 00:13:03.555 "dhgroup": "ffdhe6144" 00:13:03.555 } 00:13:03.555 } 00:13:03.555 ]' 00:13:03.555 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:03.837 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:03.837 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:03.837 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:03.837 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:03.837 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.837 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.837 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.095 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:13:04.095 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:13:04.663 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.663 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:04.663 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.663 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.663 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.663 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:04.663 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:04.663 04:02:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:04.922 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:13:04.922 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:04.922 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:04.922 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:04.922 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:04.922 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.922 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.922 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.922 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.922 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.922 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.922 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.922 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.490 00:13:05.490 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:05.490 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:05.490 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.749 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.749 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.749 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.749 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.749 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.749 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:05.749 { 
00:13:05.749 "cntlid": 37, 00:13:05.749 "qid": 0, 00:13:05.749 "state": "enabled", 00:13:05.749 "thread": "nvmf_tgt_poll_group_000", 00:13:05.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:05.749 "listen_address": { 00:13:05.749 "trtype": "TCP", 00:13:05.749 "adrfam": "IPv4", 00:13:05.749 "traddr": "10.0.0.3", 00:13:05.749 "trsvcid": "4420" 00:13:05.749 }, 00:13:05.749 "peer_address": { 00:13:05.749 "trtype": "TCP", 00:13:05.749 "adrfam": "IPv4", 00:13:05.749 "traddr": "10.0.0.1", 00:13:05.749 "trsvcid": "54838" 00:13:05.749 }, 00:13:05.749 "auth": { 00:13:05.749 "state": "completed", 00:13:05.749 "digest": "sha256", 00:13:05.749 "dhgroup": "ffdhe6144" 00:13:05.749 } 00:13:05.749 } 00:13:05.749 ]' 00:13:05.749 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:05.749 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:05.749 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:05.749 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:05.749 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:05.749 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.749 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.749 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.144 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:13:06.144 04:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:13:06.711 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.711 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:06.711 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.711 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.711 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.711 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:06.711 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:06.711 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:07.278 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:13:07.278 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:07.278 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:07.278 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:07.278 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:07.278 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.278 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:13:07.278 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.278 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.278 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.278 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:07.278 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:07.278 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:07.535 00:13:07.535 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:07.535 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.535 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:07.793 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.793 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.793 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.793 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.793 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.793 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:13:07.793 { 00:13:07.793 "cntlid": 39, 00:13:07.793 "qid": 0, 00:13:07.793 "state": "enabled", 00:13:07.793 "thread": "nvmf_tgt_poll_group_000", 00:13:07.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:07.793 "listen_address": { 00:13:07.794 "trtype": "TCP", 00:13:07.794 "adrfam": "IPv4", 00:13:07.794 "traddr": "10.0.0.3", 00:13:07.794 "trsvcid": "4420" 00:13:07.794 }, 00:13:07.794 "peer_address": { 00:13:07.794 "trtype": "TCP", 00:13:07.794 "adrfam": "IPv4", 00:13:07.794 "traddr": "10.0.0.1", 00:13:07.794 "trsvcid": "46948" 00:13:07.794 }, 00:13:07.794 "auth": { 00:13:07.794 "state": "completed", 00:13:07.794 "digest": "sha256", 00:13:07.794 "dhgroup": "ffdhe6144" 00:13:07.794 } 00:13:07.794 } 00:13:07.794 ]' 00:13:07.794 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.057 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:08.057 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.057 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:08.057 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.057 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.057 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.057 04:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.315 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:13:08.315 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:13:09.248 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.248 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:09.248 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.248 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.248 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.248 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:09.248 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.248 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:09.248 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:09.248 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:13:09.248 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:09.248 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:09.248 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:09.248 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:09.248 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.248 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.248 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.248 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.248 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.248 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.248 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.248 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.814 00:13:10.072 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.072 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.072 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.331 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.331 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.331 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.331 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.331 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:10.331 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:10.331 { 00:13:10.331 "cntlid": 41, 00:13:10.331 "qid": 0, 00:13:10.331 "state": "enabled", 00:13:10.331 "thread": "nvmf_tgt_poll_group_000", 00:13:10.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:10.331 "listen_address": { 00:13:10.331 "trtype": "TCP", 00:13:10.331 "adrfam": "IPv4", 00:13:10.331 "traddr": "10.0.0.3", 00:13:10.331 "trsvcid": "4420" 00:13:10.331 }, 00:13:10.331 "peer_address": { 00:13:10.331 "trtype": "TCP", 00:13:10.331 "adrfam": "IPv4", 00:13:10.331 "traddr": "10.0.0.1", 00:13:10.331 "trsvcid": "46970" 00:13:10.331 }, 00:13:10.331 "auth": { 00:13:10.331 "state": "completed", 00:13:10.331 "digest": "sha256", 00:13:10.331 "dhgroup": "ffdhe8192" 00:13:10.331 } 00:13:10.331 } 00:13:10.331 ]' 00:13:10.331 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:10.331 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:10.331 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:10.331 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:10.331 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:10.331 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.331 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.331 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.588 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:13:10.588 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:13:11.523 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.523 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:11.523 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.523 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.523 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
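The check that closes each of these iterations can be replayed by hand while a controller is attached; a minimal sketch using only the RPC calls and jq filters already traced above (the target-side rpc.py invocation is assumed to reach the target's default RPC socket, which this log never prints):

# Host side: the authenticated controller should be listed under the name used at attach time.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0

# Target side: inspect the negotiated DH-HMAC-CHAP parameters on the qpair.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
# For the iteration just above this would print sha256, ffdhe8192 and completed.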
00:13:11.523 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:11.523 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:11.523 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:11.523 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:13:11.523 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:11.523 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:11.523 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:11.523 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:11.523 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.523 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.523 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.523 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.782 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.782 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.782 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.782 04:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.350 00:13:12.350 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.350 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.350 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.608 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.608 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.608 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.608 04:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.608 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.608 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.608 { 00:13:12.608 "cntlid": 43, 00:13:12.608 "qid": 0, 00:13:12.608 "state": "enabled", 00:13:12.608 "thread": "nvmf_tgt_poll_group_000", 00:13:12.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:12.608 "listen_address": { 00:13:12.608 "trtype": "TCP", 00:13:12.608 "adrfam": "IPv4", 00:13:12.608 "traddr": "10.0.0.3", 00:13:12.608 "trsvcid": "4420" 00:13:12.608 }, 00:13:12.608 "peer_address": { 00:13:12.608 "trtype": "TCP", 00:13:12.608 "adrfam": "IPv4", 00:13:12.608 "traddr": "10.0.0.1", 00:13:12.608 "trsvcid": "47000" 00:13:12.608 }, 00:13:12.608 "auth": { 00:13:12.608 "state": "completed", 00:13:12.608 "digest": "sha256", 00:13:12.608 "dhgroup": "ffdhe8192" 00:13:12.608 } 00:13:12.608 } 00:13:12.608 ]' 00:13:12.608 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.609 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:12.609 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:12.609 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:12.609 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:12.867 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.867 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.867 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.154 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:13:13.154 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:13:13.719 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.719 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:13.719 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.719 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
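Besides the SPDK bdev initiator, every key is also tried from the kernel initiator with nvme-cli, passing the host and controller secrets explicitly; a minimal sketch with placeholder secrets (the real DHHC-1 strings used by this run appear verbatim in the surrounding entries):

# Kernel initiator: bidirectional DH-HMAC-CHAP against the listener at 10.0.0.3:4420.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc \
    --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 \
    --dhchap-secret 'DHHC-1:01:<host key>' --dhchap-ctrl-secret 'DHHC-1:02:<controller key>'

# Drop the session again before the next key in the loop is exercised.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0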
00:13:13.719 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.719 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:13.719 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:13.719 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:13.978 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:13:13.978 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:13.978 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:13.978 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:13.978 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:13.978 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.978 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.978 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.978 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.978 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.978 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.978 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.978 04:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.915 00:13:14.915 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:14.915 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.915 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:14.915 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.915 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.915 04:02:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.915 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.915 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.915 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.915 { 00:13:14.915 "cntlid": 45, 00:13:14.915 "qid": 0, 00:13:14.915 "state": "enabled", 00:13:14.915 "thread": "nvmf_tgt_poll_group_000", 00:13:14.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:14.915 "listen_address": { 00:13:14.915 "trtype": "TCP", 00:13:14.915 "adrfam": "IPv4", 00:13:14.915 "traddr": "10.0.0.3", 00:13:14.915 "trsvcid": "4420" 00:13:14.915 }, 00:13:14.915 "peer_address": { 00:13:14.915 "trtype": "TCP", 00:13:14.915 "adrfam": "IPv4", 00:13:14.915 "traddr": "10.0.0.1", 00:13:14.915 "trsvcid": "47036" 00:13:14.915 }, 00:13:14.915 "auth": { 00:13:14.915 "state": "completed", 00:13:14.915 "digest": "sha256", 00:13:14.915 "dhgroup": "ffdhe8192" 00:13:14.915 } 00:13:14.915 } 00:13:14.915 ]' 00:13:14.915 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.915 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:14.915 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.174 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:15.174 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.174 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.174 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.174 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.432 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:13:15.432 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:13:16.366 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.366 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:16.366 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:16.366 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.366 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.366 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.366 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:16.366 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:16.366 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:13:16.366 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.366 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:16.366 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:16.366 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:16.366 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.366 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:13:16.366 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.366 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.366 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.366 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:16.366 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:16.366 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:17.299 00:13:17.299 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.299 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.299 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.299 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.569 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.569 
04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.569 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.569 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.569 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.569 { 00:13:17.569 "cntlid": 47, 00:13:17.569 "qid": 0, 00:13:17.569 "state": "enabled", 00:13:17.569 "thread": "nvmf_tgt_poll_group_000", 00:13:17.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:17.569 "listen_address": { 00:13:17.569 "trtype": "TCP", 00:13:17.569 "adrfam": "IPv4", 00:13:17.569 "traddr": "10.0.0.3", 00:13:17.569 "trsvcid": "4420" 00:13:17.569 }, 00:13:17.569 "peer_address": { 00:13:17.569 "trtype": "TCP", 00:13:17.569 "adrfam": "IPv4", 00:13:17.569 "traddr": "10.0.0.1", 00:13:17.569 "trsvcid": "44672" 00:13:17.569 }, 00:13:17.569 "auth": { 00:13:17.569 "state": "completed", 00:13:17.569 "digest": "sha256", 00:13:17.569 "dhgroup": "ffdhe8192" 00:13:17.569 } 00:13:17.569 } 00:13:17.569 ]' 00:13:17.569 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.569 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:17.569 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.569 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:17.569 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.569 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.569 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.569 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.828 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:13:17.828 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:13:18.768 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.768 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:18.768 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.768 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
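Each group of iterations starts by narrowing the host's bdev layer to one digest/dhgroup pair and registering the matching named key with the target subsystem; a minimal sketch of that pairing for the sha384/null group that begins below, assuming key0 and ckey0 were registered as key names earlier in the run (outside this section) and that the target rpc.py call uses the default socket:

# Host: offer only sha384 with the null DH group during DH-HMAC-CHAP negotiation.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups null

# Target: allow this host NQN only with the named host/controller keys.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0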
00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.769 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.334 00:13:19.334 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.334 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.334 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.592 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.592 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.592 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.592 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.592 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.592 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.592 { 00:13:19.592 "cntlid": 49, 00:13:19.592 "qid": 0, 00:13:19.592 "state": "enabled", 00:13:19.592 "thread": "nvmf_tgt_poll_group_000", 00:13:19.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:19.593 "listen_address": { 00:13:19.593 "trtype": "TCP", 00:13:19.593 "adrfam": "IPv4", 00:13:19.593 "traddr": "10.0.0.3", 00:13:19.593 "trsvcid": "4420" 00:13:19.593 }, 00:13:19.593 "peer_address": { 00:13:19.593 "trtype": "TCP", 00:13:19.593 "adrfam": "IPv4", 00:13:19.593 "traddr": "10.0.0.1", 00:13:19.593 "trsvcid": "44712" 00:13:19.593 }, 00:13:19.593 "auth": { 00:13:19.593 "state": "completed", 00:13:19.593 "digest": "sha384", 00:13:19.593 "dhgroup": "null" 00:13:19.593 } 00:13:19.593 } 00:13:19.593 ]' 00:13:19.593 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.593 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:19.593 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.593 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:19.593 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.593 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.593 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.593 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.852 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:13:19.852 04:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:13:20.423 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.423 04:03:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:20.423 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.423 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.681 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.681 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.681 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:20.681 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:20.939 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:20.939 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.939 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:20.939 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:20.939 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:20.939 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.939 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.939 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.939 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.939 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.939 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.939 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.939 04:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.198 00:13:21.198 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.198 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.198 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.457 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.457 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.457 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.457 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.457 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.457 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.457 { 00:13:21.457 "cntlid": 51, 00:13:21.457 "qid": 0, 00:13:21.457 "state": "enabled", 00:13:21.457 "thread": "nvmf_tgt_poll_group_000", 00:13:21.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:21.457 "listen_address": { 00:13:21.457 "trtype": "TCP", 00:13:21.457 "adrfam": "IPv4", 00:13:21.457 "traddr": "10.0.0.3", 00:13:21.457 "trsvcid": "4420" 00:13:21.457 }, 00:13:21.457 "peer_address": { 00:13:21.457 "trtype": "TCP", 00:13:21.457 "adrfam": "IPv4", 00:13:21.457 "traddr": "10.0.0.1", 00:13:21.457 "trsvcid": "44750" 00:13:21.457 }, 00:13:21.457 "auth": { 00:13:21.457 "state": "completed", 00:13:21.457 "digest": "sha384", 00:13:21.457 "dhgroup": "null" 00:13:21.457 } 00:13:21.457 } 00:13:21.457 ]' 00:13:21.457 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.457 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:21.457 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.457 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:21.457 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.715 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.715 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.715 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.973 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:13:21.973 04:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:13:22.539 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.539 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.539 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:22.539 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.539 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.539 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.539 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.539 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:22.539 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:23.105 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:23.105 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:23.105 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:23.105 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:23.105 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:23.105 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.105 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.105 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.105 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.105 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.105 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.105 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.105 04:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.362 00:13:23.362 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.362 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:13:23.362 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.619 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.620 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.620 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.620 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.620 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.620 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.620 { 00:13:23.620 "cntlid": 53, 00:13:23.620 "qid": 0, 00:13:23.620 "state": "enabled", 00:13:23.620 "thread": "nvmf_tgt_poll_group_000", 00:13:23.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:23.620 "listen_address": { 00:13:23.620 "trtype": "TCP", 00:13:23.620 "adrfam": "IPv4", 00:13:23.620 "traddr": "10.0.0.3", 00:13:23.620 "trsvcid": "4420" 00:13:23.620 }, 00:13:23.620 "peer_address": { 00:13:23.620 "trtype": "TCP", 00:13:23.620 "adrfam": "IPv4", 00:13:23.620 "traddr": "10.0.0.1", 00:13:23.620 "trsvcid": "44770" 00:13:23.620 }, 00:13:23.620 "auth": { 00:13:23.620 "state": "completed", 00:13:23.620 "digest": "sha384", 00:13:23.620 "dhgroup": "null" 00:13:23.620 } 00:13:23.620 } 00:13:23.620 ]' 00:13:23.620 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.620 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:23.620 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.876 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:23.876 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.877 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.877 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.877 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.133 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:13:24.133 04:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:13:24.697 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.697 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:24.697 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.697 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.697 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.697 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.697 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:24.697 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:25.261 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:13:25.261 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:25.261 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:25.261 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:25.261 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:25.261 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.261 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:13:25.261 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.261 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.261 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.261 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:25.261 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:25.261 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:25.519 00:13:25.519 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.519 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.519 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.776 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.776 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.776 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.776 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.776 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.776 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.776 { 00:13:25.776 "cntlid": 55, 00:13:25.776 "qid": 0, 00:13:25.776 "state": "enabled", 00:13:25.776 "thread": "nvmf_tgt_poll_group_000", 00:13:25.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:25.776 "listen_address": { 00:13:25.776 "trtype": "TCP", 00:13:25.776 "adrfam": "IPv4", 00:13:25.776 "traddr": "10.0.0.3", 00:13:25.776 "trsvcid": "4420" 00:13:25.776 }, 00:13:25.776 "peer_address": { 00:13:25.776 "trtype": "TCP", 00:13:25.776 "adrfam": "IPv4", 00:13:25.776 "traddr": "10.0.0.1", 00:13:25.776 "trsvcid": "44802" 00:13:25.776 }, 00:13:25.776 "auth": { 00:13:25.776 "state": "completed", 00:13:25.776 "digest": "sha384", 00:13:25.776 "dhgroup": "null" 00:13:25.776 } 00:13:25.776 } 00:13:25.776 ]' 00:13:25.776 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.776 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:25.776 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.776 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:25.776 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.776 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.776 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.776 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.034 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:13:26.034 04:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:13:26.981 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:13:26.981 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:26.981 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.981 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.981 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.981 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:26.981 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.981 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:26.981 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:27.240 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:13:27.240 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:27.240 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:27.240 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:27.240 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:27.240 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.240 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.240 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.240 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.240 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.240 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.240 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.240 04:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.500 00:13:27.500 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
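For readers following the trace: each iteration above and below is one pass of the same connect_authenticate sequence driven by target/auth.sh, repeated per digest, DH group and key index. The lines below are a condensed, hand-written sketch of that single pass, not the verbatim script; the shell variables ($hostnqn, $hostid, $keyid, $key, $ckey) are placeholders standing in for values the real test derives from its keys[] array and the trace above.

    # one pass of the DH-HMAC-CHAP check as exercised in this trace (sketch, placeholder values)
    digest=sha384; dhgroup=ffdhe2048; keyid=0

    # host side: restrict the initiator bdev layer to the digest/dhgroup under test
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # target side: allow the host NQN with the (optionally bidirectional) key pair
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # attach a controller through the host RPC, then confirm the qpair negotiated
    # the expected digest/dhgroup and reached auth.state == "completed"
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state'    # expected: completed
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # repeat the handshake with the kernel initiator, then tear down
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"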
00:13:27.500 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.500 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.758 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.758 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.758 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.758 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.758 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.758 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.758 { 00:13:27.758 "cntlid": 57, 00:13:27.758 "qid": 0, 00:13:27.758 "state": "enabled", 00:13:27.758 "thread": "nvmf_tgt_poll_group_000", 00:13:27.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:27.758 "listen_address": { 00:13:27.758 "trtype": "TCP", 00:13:27.758 "adrfam": "IPv4", 00:13:27.758 "traddr": "10.0.0.3", 00:13:27.758 "trsvcid": "4420" 00:13:27.758 }, 00:13:27.758 "peer_address": { 00:13:27.758 "trtype": "TCP", 00:13:27.758 "adrfam": "IPv4", 00:13:27.758 "traddr": "10.0.0.1", 00:13:27.758 "trsvcid": "58050" 00:13:27.758 }, 00:13:27.758 "auth": { 00:13:27.758 "state": "completed", 00:13:27.758 "digest": "sha384", 00:13:27.758 "dhgroup": "ffdhe2048" 00:13:27.758 } 00:13:27.758 } 00:13:27.758 ]' 00:13:27.758 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.758 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:27.758 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.016 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:28.016 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.016 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.016 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.016 04:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.274 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:13:28.274 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: 
--dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:13:28.841 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.841 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:28.841 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.841 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.841 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.841 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.841 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:28.841 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:29.100 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:13:29.100 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.100 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:29.100 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:29.100 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:29.100 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.100 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.100 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.100 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.100 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.100 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.100 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.100 04:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.358 00:13:29.358 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.358 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.358 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.925 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.925 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.925 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.925 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.925 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.925 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.925 { 00:13:29.925 "cntlid": 59, 00:13:29.925 "qid": 0, 00:13:29.925 "state": "enabled", 00:13:29.925 "thread": "nvmf_tgt_poll_group_000", 00:13:29.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:29.925 "listen_address": { 00:13:29.925 "trtype": "TCP", 00:13:29.925 "adrfam": "IPv4", 00:13:29.925 "traddr": "10.0.0.3", 00:13:29.925 "trsvcid": "4420" 00:13:29.925 }, 00:13:29.925 "peer_address": { 00:13:29.925 "trtype": "TCP", 00:13:29.925 "adrfam": "IPv4", 00:13:29.925 "traddr": "10.0.0.1", 00:13:29.925 "trsvcid": "58076" 00:13:29.925 }, 00:13:29.925 "auth": { 00:13:29.925 "state": "completed", 00:13:29.925 "digest": "sha384", 00:13:29.925 "dhgroup": "ffdhe2048" 00:13:29.925 } 00:13:29.925 } 00:13:29.925 ]' 00:13:29.925 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.925 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:29.925 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.925 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:29.925 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.925 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.925 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.925 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.183 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:13:30.183 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:13:30.750 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.750 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:30.750 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.750 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.750 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.750 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.750 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:30.750 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:31.009 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:13:31.009 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.009 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:31.009 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:31.009 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:31.009 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.009 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.009 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.009 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.009 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.009 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.009 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.009 04:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.576 00:13:31.576 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.576 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.576 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.834 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.834 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.834 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.834 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.834 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.834 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.834 { 00:13:31.834 "cntlid": 61, 00:13:31.834 "qid": 0, 00:13:31.834 "state": "enabled", 00:13:31.834 "thread": "nvmf_tgt_poll_group_000", 00:13:31.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:31.834 "listen_address": { 00:13:31.834 "trtype": "TCP", 00:13:31.834 "adrfam": "IPv4", 00:13:31.834 "traddr": "10.0.0.3", 00:13:31.834 "trsvcid": "4420" 00:13:31.834 }, 00:13:31.834 "peer_address": { 00:13:31.834 "trtype": "TCP", 00:13:31.834 "adrfam": "IPv4", 00:13:31.834 "traddr": "10.0.0.1", 00:13:31.834 "trsvcid": "58122" 00:13:31.834 }, 00:13:31.834 "auth": { 00:13:31.834 "state": "completed", 00:13:31.834 "digest": "sha384", 00:13:31.834 "dhgroup": "ffdhe2048" 00:13:31.834 } 00:13:31.834 } 00:13:31.834 ]' 00:13:31.834 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.834 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:31.834 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.834 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:31.834 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:32.093 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.093 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.093 04:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.368 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:13:32.368 04:03:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:13:32.933 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.933 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:32.933 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.933 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.934 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.934 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.934 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:32.934 04:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:33.191 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:13:33.191 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.191 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:33.191 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:33.191 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:33.191 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.191 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:13:33.191 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.191 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.191 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.191 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:33.192 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:33.192 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:33.756 00:13:33.756 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.756 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.756 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.756 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.756 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.756 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.756 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.756 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.756 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.756 { 00:13:33.756 "cntlid": 63, 00:13:33.756 "qid": 0, 00:13:33.756 "state": "enabled", 00:13:33.756 "thread": "nvmf_tgt_poll_group_000", 00:13:33.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:33.756 "listen_address": { 00:13:33.756 "trtype": "TCP", 00:13:33.756 "adrfam": "IPv4", 00:13:33.756 "traddr": "10.0.0.3", 00:13:33.756 "trsvcid": "4420" 00:13:33.756 }, 00:13:33.756 "peer_address": { 00:13:33.756 "trtype": "TCP", 00:13:33.756 "adrfam": "IPv4", 00:13:33.756 "traddr": "10.0.0.1", 00:13:33.756 "trsvcid": "58136" 00:13:33.756 }, 00:13:33.756 "auth": { 00:13:33.756 "state": "completed", 00:13:33.756 "digest": "sha384", 00:13:33.756 "dhgroup": "ffdhe2048" 00:13:33.756 } 00:13:33.756 } 00:13:33.756 ]' 00:13:33.756 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.013 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:34.013 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.013 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:34.013 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.013 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.013 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.013 04:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.270 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:13:34.270 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:13:34.835 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.835 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:34.835 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.835 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.835 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.835 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:34.835 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.835 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:34.835 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:35.400 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:13:35.400 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.400 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:35.400 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:35.400 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:35.400 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.400 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.400 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.400 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.400 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.400 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.400 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:13:35.400 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.658 00:13:35.658 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.658 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.658 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.915 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.915 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.915 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.915 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.915 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.915 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.915 { 00:13:35.915 "cntlid": 65, 00:13:35.915 "qid": 0, 00:13:35.915 "state": "enabled", 00:13:35.915 "thread": "nvmf_tgt_poll_group_000", 00:13:35.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:35.915 "listen_address": { 00:13:35.915 "trtype": "TCP", 00:13:35.915 "adrfam": "IPv4", 00:13:35.915 "traddr": "10.0.0.3", 00:13:35.915 "trsvcid": "4420" 00:13:35.915 }, 00:13:35.915 "peer_address": { 00:13:35.915 "trtype": "TCP", 00:13:35.915 "adrfam": "IPv4", 00:13:35.915 "traddr": "10.0.0.1", 00:13:35.915 "trsvcid": "58168" 00:13:35.915 }, 00:13:35.915 "auth": { 00:13:35.915 "state": "completed", 00:13:35.915 "digest": "sha384", 00:13:35.915 "dhgroup": "ffdhe3072" 00:13:35.915 } 00:13:35.915 } 00:13:35.915 ]' 00:13:35.915 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.915 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:35.915 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.915 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:35.915 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:36.173 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.173 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.173 04:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.430 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:13:36.430 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:13:36.996 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.996 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:36.996 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.996 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.996 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.996 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.996 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:36.996 04:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:37.254 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:13:37.254 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.254 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:37.254 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:37.254 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:37.254 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.254 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.254 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.254 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.254 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.254 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.254 04:03:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.254 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.822 00:13:37.822 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.822 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.822 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:38.112 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.112 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.112 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.112 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.112 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.112 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.112 { 00:13:38.112 "cntlid": 67, 00:13:38.112 "qid": 0, 00:13:38.112 "state": "enabled", 00:13:38.112 "thread": "nvmf_tgt_poll_group_000", 00:13:38.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:38.112 "listen_address": { 00:13:38.112 "trtype": "TCP", 00:13:38.112 "adrfam": "IPv4", 00:13:38.112 "traddr": "10.0.0.3", 00:13:38.112 "trsvcid": "4420" 00:13:38.112 }, 00:13:38.112 "peer_address": { 00:13:38.112 "trtype": "TCP", 00:13:38.112 "adrfam": "IPv4", 00:13:38.112 "traddr": "10.0.0.1", 00:13:38.112 "trsvcid": "43854" 00:13:38.112 }, 00:13:38.112 "auth": { 00:13:38.112 "state": "completed", 00:13:38.112 "digest": "sha384", 00:13:38.112 "dhgroup": "ffdhe3072" 00:13:38.112 } 00:13:38.112 } 00:13:38.112 ]' 00:13:38.112 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:38.112 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:38.112 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:38.112 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:38.112 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.112 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.112 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.112 04:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.386 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:13:38.386 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:13:38.954 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.954 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:38.954 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.954 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.954 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.954 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.954 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:38.954 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:39.213 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:13:39.213 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.213 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:39.213 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:39.213 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:39.213 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.213 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.213 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.213 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.473 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.473 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.473 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.473 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.732 00:13:39.732 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.732 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.732 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.990 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.990 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.990 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.990 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.990 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.990 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.990 { 00:13:39.990 "cntlid": 69, 00:13:39.990 "qid": 0, 00:13:39.990 "state": "enabled", 00:13:39.990 "thread": "nvmf_tgt_poll_group_000", 00:13:39.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:39.990 "listen_address": { 00:13:39.990 "trtype": "TCP", 00:13:39.990 "adrfam": "IPv4", 00:13:39.990 "traddr": "10.0.0.3", 00:13:39.990 "trsvcid": "4420" 00:13:39.990 }, 00:13:39.990 "peer_address": { 00:13:39.990 "trtype": "TCP", 00:13:39.990 "adrfam": "IPv4", 00:13:39.990 "traddr": "10.0.0.1", 00:13:39.990 "trsvcid": "43880" 00:13:39.990 }, 00:13:39.990 "auth": { 00:13:39.990 "state": "completed", 00:13:39.990 "digest": "sha384", 00:13:39.991 "dhgroup": "ffdhe3072" 00:13:39.991 } 00:13:39.991 } 00:13:39.991 ]' 00:13:39.991 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.991 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:39.991 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.991 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:39.991 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:40.249 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.249 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:40.249 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.508 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:13:40.508 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:13:41.077 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.077 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:41.077 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.077 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.077 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.077 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.077 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:41.077 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:41.334 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:13:41.335 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.335 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:41.335 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:41.335 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:41.335 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.335 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:13:41.335 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.335 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.335 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.335 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:41.335 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:41.335 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:41.900 00:13:41.900 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.900 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.900 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:42.158 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.158 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.158 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.158 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.158 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.158 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:42.158 { 00:13:42.158 "cntlid": 71, 00:13:42.158 "qid": 0, 00:13:42.158 "state": "enabled", 00:13:42.158 "thread": "nvmf_tgt_poll_group_000", 00:13:42.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:42.158 "listen_address": { 00:13:42.158 "trtype": "TCP", 00:13:42.158 "adrfam": "IPv4", 00:13:42.158 "traddr": "10.0.0.3", 00:13:42.158 "trsvcid": "4420" 00:13:42.158 }, 00:13:42.158 "peer_address": { 00:13:42.158 "trtype": "TCP", 00:13:42.158 "adrfam": "IPv4", 00:13:42.158 "traddr": "10.0.0.1", 00:13:42.158 "trsvcid": "43922" 00:13:42.158 }, 00:13:42.158 "auth": { 00:13:42.158 "state": "completed", 00:13:42.158 "digest": "sha384", 00:13:42.158 "dhgroup": "ffdhe3072" 00:13:42.158 } 00:13:42.158 } 00:13:42.158 ]' 00:13:42.158 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:42.158 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:42.158 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:42.158 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:42.158 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.158 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.158 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.158 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.721 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:13:42.721 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:13:43.286 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.286 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:43.286 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.286 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.286 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.286 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:43.286 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.286 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:43.287 04:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:43.560 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:13:43.560 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.560 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:43.560 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:43.560 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:43.560 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.560 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:43.560 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.560 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.560 04:03:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.560 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:43.560 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:43.560 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:43.876 00:13:43.876 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.876 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.876 04:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:44.134 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.134 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.134 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.134 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.134 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.134 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.134 { 00:13:44.134 "cntlid": 73, 00:13:44.134 "qid": 0, 00:13:44.134 "state": "enabled", 00:13:44.134 "thread": "nvmf_tgt_poll_group_000", 00:13:44.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:44.134 "listen_address": { 00:13:44.134 "trtype": "TCP", 00:13:44.134 "adrfam": "IPv4", 00:13:44.134 "traddr": "10.0.0.3", 00:13:44.134 "trsvcid": "4420" 00:13:44.134 }, 00:13:44.134 "peer_address": { 00:13:44.134 "trtype": "TCP", 00:13:44.134 "adrfam": "IPv4", 00:13:44.134 "traddr": "10.0.0.1", 00:13:44.134 "trsvcid": "43938" 00:13:44.134 }, 00:13:44.134 "auth": { 00:13:44.134 "state": "completed", 00:13:44.134 "digest": "sha384", 00:13:44.134 "dhgroup": "ffdhe4096" 00:13:44.134 } 00:13:44.134 } 00:13:44.134 ]' 00:13:44.134 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.134 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:44.134 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.392 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:44.392 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.392 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.392 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.392 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.650 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:13:44.650 04:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:13:45.215 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.215 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:45.215 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.215 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.215 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.215 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:45.215 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:45.215 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:45.473 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:13:45.473 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:45.473 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:45.473 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:45.473 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:45.473 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.473 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:45.473 04:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.473 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.473 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.473 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:45.473 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:45.473 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.039 00:13:46.039 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.039 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.039 04:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.296 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.296 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.296 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.296 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.296 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.296 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.296 { 00:13:46.296 "cntlid": 75, 00:13:46.296 "qid": 0, 00:13:46.296 "state": "enabled", 00:13:46.296 "thread": "nvmf_tgt_poll_group_000", 00:13:46.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:46.296 "listen_address": { 00:13:46.296 "trtype": "TCP", 00:13:46.296 "adrfam": "IPv4", 00:13:46.296 "traddr": "10.0.0.3", 00:13:46.296 "trsvcid": "4420" 00:13:46.296 }, 00:13:46.296 "peer_address": { 00:13:46.296 "trtype": "TCP", 00:13:46.296 "adrfam": "IPv4", 00:13:46.296 "traddr": "10.0.0.1", 00:13:46.296 "trsvcid": "43970" 00:13:46.296 }, 00:13:46.296 "auth": { 00:13:46.296 "state": "completed", 00:13:46.296 "digest": "sha384", 00:13:46.296 "dhgroup": "ffdhe4096" 00:13:46.296 } 00:13:46.296 } 00:13:46.296 ]' 00:13:46.296 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.296 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:46.297 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.297 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:13:46.297 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.297 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.297 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.297 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.554 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:13:46.554 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:47.487 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.053 00:13:48.053 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:48.053 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:48.053 04:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.312 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.312 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.312 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.312 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.312 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.312 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:48.312 { 00:13:48.312 "cntlid": 77, 00:13:48.312 "qid": 0, 00:13:48.312 "state": "enabled", 00:13:48.312 "thread": "nvmf_tgt_poll_group_000", 00:13:48.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:48.312 "listen_address": { 00:13:48.312 "trtype": "TCP", 00:13:48.312 "adrfam": "IPv4", 00:13:48.312 "traddr": "10.0.0.3", 00:13:48.312 "trsvcid": "4420" 00:13:48.312 }, 00:13:48.312 "peer_address": { 00:13:48.312 "trtype": "TCP", 00:13:48.312 "adrfam": "IPv4", 00:13:48.312 "traddr": "10.0.0.1", 00:13:48.312 "trsvcid": "43920" 00:13:48.312 }, 00:13:48.312 "auth": { 00:13:48.312 "state": "completed", 00:13:48.312 "digest": "sha384", 00:13:48.312 "dhgroup": "ffdhe4096" 00:13:48.312 } 00:13:48.312 } 00:13:48.312 ]' 00:13:48.312 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:48.312 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:48.312 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:13:48.312 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:48.312 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:48.312 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.312 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.312 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.571 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:13:48.571 04:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:13:49.505 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.505 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:49.505 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.505 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.505 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.505 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:49.505 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:49.505 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:49.764 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:13:49.764 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:49.764 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:49.764 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:49.764 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:49.764 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.764 04:03:31 
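Besides the bdev-level attach, each round also exercises the kernel initiator through nvme-cli, as in the nvme connect / nvme disconnect entries above. A sketch of that step with the secrets replaced by placeholders; $HOST_KEY and $CTRL_KEY stand in for the generated DHHC-1 strings printed in the log and are not real values:

    # connect with DH-HMAC-CHAP using the same flags as the traced command
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc \
        --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 \
        --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"
    # tear the session down again before the next key/dhgroup combination
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0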
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:13:49.764 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.764 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.764 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.764 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:49.764 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:49.764 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:50.024 00:13:50.024 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:50.024 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.024 04:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:50.592 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.592 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.592 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.592 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.592 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.592 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:50.592 { 00:13:50.592 "cntlid": 79, 00:13:50.592 "qid": 0, 00:13:50.592 "state": "enabled", 00:13:50.592 "thread": "nvmf_tgt_poll_group_000", 00:13:50.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:50.592 "listen_address": { 00:13:50.592 "trtype": "TCP", 00:13:50.592 "adrfam": "IPv4", 00:13:50.592 "traddr": "10.0.0.3", 00:13:50.592 "trsvcid": "4420" 00:13:50.592 }, 00:13:50.592 "peer_address": { 00:13:50.592 "trtype": "TCP", 00:13:50.592 "adrfam": "IPv4", 00:13:50.592 "traddr": "10.0.0.1", 00:13:50.592 "trsvcid": "43952" 00:13:50.592 }, 00:13:50.592 "auth": { 00:13:50.592 "state": "completed", 00:13:50.592 "digest": "sha384", 00:13:50.592 "dhgroup": "ffdhe4096" 00:13:50.592 } 00:13:50.592 } 00:13:50.592 ]' 00:13:50.592 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:50.592 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:50.592 04:03:32 
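The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible in the trace is what makes the key3 rounds differ from the others: when no controller key is defined for an index, the array stays empty, so nvmf_subsystem_add_host and bdev_nvme_attach_controller are called with --dhchap-key only and that round authenticates unidirectionally. A reading of the idiom with a hypothetical ckeys array, for illustration only:

    # :+ expands to the two-word flag when ckeys[i] is set and non-empty,
    # and to nothing otherwise, so the option is simply omitted for index 3
    ckeys=( ckey0 ckey1 ckey2 "" )
    for i in 0 1 2 3; do
        ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
        echo "key$i -> ${ckey[*]:-<no controller key>}"
    done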
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:50.592 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:50.592 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:50.592 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.592 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.592 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.850 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:13:50.850 04:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:13:51.783 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.783 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:51.783 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.783 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.783 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.783 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:51.783 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:51.783 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:51.783 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:52.040 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:13:52.040 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.040 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:52.040 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:52.040 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:52.040 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.040 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.040 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.040 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.040 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.040 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.040 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.040 04:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.298 00:13:52.555 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:52.555 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.555 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:52.812 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.812 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.812 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.812 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.812 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.812 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:52.812 { 00:13:52.813 "cntlid": 81, 00:13:52.813 "qid": 0, 00:13:52.813 "state": "enabled", 00:13:52.813 "thread": "nvmf_tgt_poll_group_000", 00:13:52.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:52.813 "listen_address": { 00:13:52.813 "trtype": "TCP", 00:13:52.813 "adrfam": "IPv4", 00:13:52.813 "traddr": "10.0.0.3", 00:13:52.813 "trsvcid": "4420" 00:13:52.813 }, 00:13:52.813 "peer_address": { 00:13:52.813 "trtype": "TCP", 00:13:52.813 "adrfam": "IPv4", 00:13:52.813 "traddr": "10.0.0.1", 00:13:52.813 "trsvcid": "43974" 00:13:52.813 }, 00:13:52.813 "auth": { 00:13:52.813 "state": "completed", 00:13:52.813 "digest": "sha384", 00:13:52.813 "dhgroup": "ffdhe6144" 00:13:52.813 } 00:13:52.813 } 00:13:52.813 ]' 00:13:52.813 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:13:52.813 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:52.813 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:52.813 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:52.813 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.071 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.071 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.071 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.328 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:13:53.328 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:13:53.893 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.893 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:53.893 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.893 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.893 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.893 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:53.893 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:53.893 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:54.151 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:13:54.151 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.151 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:54.151 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:13:54.151 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:54.151 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.151 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.151 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.151 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.151 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.151 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.151 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.151 04:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.715 00:13:54.715 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:54.715 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.715 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:54.973 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.973 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.973 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.973 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.973 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.973 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:54.973 { 00:13:54.973 "cntlid": 83, 00:13:54.973 "qid": 0, 00:13:54.973 "state": "enabled", 00:13:54.973 "thread": "nvmf_tgt_poll_group_000", 00:13:54.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:54.973 "listen_address": { 00:13:54.973 "trtype": "TCP", 00:13:54.973 "adrfam": "IPv4", 00:13:54.973 "traddr": "10.0.0.3", 00:13:54.973 "trsvcid": "4420" 00:13:54.973 }, 00:13:54.973 "peer_address": { 00:13:54.973 "trtype": "TCP", 00:13:54.973 "adrfam": "IPv4", 00:13:54.973 "traddr": "10.0.0.1", 00:13:54.973 "trsvcid": "44000" 00:13:54.973 }, 00:13:54.973 "auth": { 00:13:54.973 "state": "completed", 00:13:54.973 "digest": "sha384", 
00:13:54.973 "dhgroup": "ffdhe6144" 00:13:54.973 } 00:13:54.973 } 00:13:54.973 ]' 00:13:54.973 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:54.973 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:54.973 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:54.973 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:54.973 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.230 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.230 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.230 04:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.489 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:13:55.489 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:13:56.055 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.055 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:56.055 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.055 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.055 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.055 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.055 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:56.055 04:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:56.317 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:13:56.317 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.317 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:13:56.317 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:56.317 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:56.317 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.317 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.317 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.317 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.317 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.317 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.317 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.317 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.884 00:13:56.884 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.884 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.884 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.143 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.143 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.143 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.143 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.143 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.143 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.143 { 00:13:57.143 "cntlid": 85, 00:13:57.143 "qid": 0, 00:13:57.143 "state": "enabled", 00:13:57.143 "thread": "nvmf_tgt_poll_group_000", 00:13:57.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:57.143 "listen_address": { 00:13:57.143 "trtype": "TCP", 00:13:57.143 "adrfam": "IPv4", 00:13:57.143 "traddr": "10.0.0.3", 00:13:57.143 "trsvcid": "4420" 00:13:57.143 }, 00:13:57.143 "peer_address": { 00:13:57.143 "trtype": "TCP", 00:13:57.143 "adrfam": "IPv4", 00:13:57.143 "traddr": "10.0.0.1", 00:13:57.143 "trsvcid": "34122" 
00:13:57.143 }, 00:13:57.143 "auth": { 00:13:57.143 "state": "completed", 00:13:57.143 "digest": "sha384", 00:13:57.143 "dhgroup": "ffdhe6144" 00:13:57.143 } 00:13:57.143 } 00:13:57.143 ]' 00:13:57.143 04:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.143 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:57.143 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.143 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:57.143 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.402 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.402 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.402 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.660 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:13:57.661 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:13:58.228 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.228 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:13:58.228 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.228 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.228 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.228 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.228 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:58.228 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:58.487 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:13:58.487 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:13:58.487 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:58.487 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:58.487 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:58.487 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.487 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:13:58.487 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.487 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.487 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.487 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:58.487 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:58.487 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:59.055 00:13:59.055 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.055 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.055 04:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.314 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.314 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.314 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.314 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.314 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.314 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.314 { 00:13:59.314 "cntlid": 87, 00:13:59.314 "qid": 0, 00:13:59.314 "state": "enabled", 00:13:59.314 "thread": "nvmf_tgt_poll_group_000", 00:13:59.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:13:59.314 "listen_address": { 00:13:59.314 "trtype": "TCP", 00:13:59.314 "adrfam": "IPv4", 00:13:59.314 "traddr": "10.0.0.3", 00:13:59.314 "trsvcid": "4420" 00:13:59.314 }, 00:13:59.314 "peer_address": { 00:13:59.314 "trtype": "TCP", 00:13:59.314 "adrfam": "IPv4", 00:13:59.314 "traddr": "10.0.0.1", 00:13:59.314 "trsvcid": 
"34136" 00:13:59.314 }, 00:13:59.314 "auth": { 00:13:59.314 "state": "completed", 00:13:59.314 "digest": "sha384", 00:13:59.314 "dhgroup": "ffdhe6144" 00:13:59.314 } 00:13:59.314 } 00:13:59.314 ]' 00:13:59.314 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.314 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:59.314 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.314 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:59.314 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.314 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.314 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.314 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.882 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:13:59.882 04:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:14:00.450 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.450 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:00.450 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.450 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.450 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.450 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:00.451 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.451 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:00.451 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:00.451 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:14:00.709 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:14:00.709 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:00.709 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:00.709 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:00.709 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.709 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:00.709 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.709 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.709 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.709 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:00.709 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:00.709 04:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.277 00:14:01.277 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.277 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.277 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.536 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.536 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.536 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.536 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.536 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.536 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.536 { 00:14:01.536 "cntlid": 89, 00:14:01.536 "qid": 0, 00:14:01.536 "state": "enabled", 00:14:01.536 "thread": "nvmf_tgt_poll_group_000", 00:14:01.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:01.536 "listen_address": { 00:14:01.536 "trtype": "TCP", 00:14:01.536 "adrfam": "IPv4", 00:14:01.536 "traddr": "10.0.0.3", 00:14:01.536 "trsvcid": "4420" 00:14:01.536 }, 00:14:01.536 "peer_address": { 00:14:01.536 
"trtype": "TCP", 00:14:01.536 "adrfam": "IPv4", 00:14:01.536 "traddr": "10.0.0.1", 00:14:01.536 "trsvcid": "34164" 00:14:01.536 }, 00:14:01.536 "auth": { 00:14:01.536 "state": "completed", 00:14:01.536 "digest": "sha384", 00:14:01.536 "dhgroup": "ffdhe8192" 00:14:01.536 } 00:14:01.536 } 00:14:01.536 ]' 00:14:01.536 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.536 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:01.536 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.795 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:01.795 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.795 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.795 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.795 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.054 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:14:02.054 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:14:02.619 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.619 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:02.619 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.619 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.619 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.619 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.619 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:02.619 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:02.877 04:03:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:14:02.877 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.877 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:02.877 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:02.877 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:02.877 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.877 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:02.877 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.877 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.877 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.877 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:02.877 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:02.877 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.810 00:14:03.810 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.810 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.810 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:04.068 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.068 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.068 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.068 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.068 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.068 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.068 { 00:14:04.068 "cntlid": 91, 00:14:04.068 "qid": 0, 00:14:04.068 "state": "enabled", 00:14:04.068 "thread": "nvmf_tgt_poll_group_000", 00:14:04.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 
00:14:04.068 "listen_address": { 00:14:04.068 "trtype": "TCP", 00:14:04.068 "adrfam": "IPv4", 00:14:04.068 "traddr": "10.0.0.3", 00:14:04.068 "trsvcid": "4420" 00:14:04.068 }, 00:14:04.068 "peer_address": { 00:14:04.068 "trtype": "TCP", 00:14:04.068 "adrfam": "IPv4", 00:14:04.068 "traddr": "10.0.0.1", 00:14:04.068 "trsvcid": "34210" 00:14:04.068 }, 00:14:04.068 "auth": { 00:14:04.068 "state": "completed", 00:14:04.068 "digest": "sha384", 00:14:04.068 "dhgroup": "ffdhe8192" 00:14:04.068 } 00:14:04.068 } 00:14:04.068 ]' 00:14:04.068 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:04.068 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:04.068 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:04.068 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:04.068 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:04.068 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.068 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.068 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.327 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:14:04.327 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:14:05.259 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.259 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:05.259 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.259 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.259 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.259 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:05.259 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:05.259 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:05.516 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:14:05.516 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.516 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:05.516 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:05.516 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:05.517 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.517 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.517 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.517 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.517 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.517 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.517 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.517 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.081 00:14:06.081 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.081 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.081 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.649 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.649 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.649 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.649 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.649 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.649 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.649 { 00:14:06.649 "cntlid": 93, 00:14:06.649 "qid": 0, 00:14:06.649 "state": "enabled", 00:14:06.649 "thread": 
"nvmf_tgt_poll_group_000", 00:14:06.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:06.649 "listen_address": { 00:14:06.649 "trtype": "TCP", 00:14:06.649 "adrfam": "IPv4", 00:14:06.649 "traddr": "10.0.0.3", 00:14:06.649 "trsvcid": "4420" 00:14:06.649 }, 00:14:06.649 "peer_address": { 00:14:06.649 "trtype": "TCP", 00:14:06.649 "adrfam": "IPv4", 00:14:06.649 "traddr": "10.0.0.1", 00:14:06.649 "trsvcid": "34244" 00:14:06.649 }, 00:14:06.649 "auth": { 00:14:06.649 "state": "completed", 00:14:06.649 "digest": "sha384", 00:14:06.649 "dhgroup": "ffdhe8192" 00:14:06.649 } 00:14:06.649 } 00:14:06.649 ]' 00:14:06.649 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.649 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:06.649 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.649 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:06.649 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.649 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.649 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.649 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.907 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:14:06.907 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:14:07.842 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.842 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:07.842 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.842 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.842 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.842 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.842 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:07.842 04:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:07.842 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:14:07.842 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.842 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:07.842 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:07.842 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:07.842 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.842 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:14:07.842 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.842 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.842 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.842 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:08.101 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:08.101 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:08.668 00:14:08.668 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.668 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.668 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.927 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.927 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.927 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.927 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.927 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.927 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:08.927 { 00:14:08.927 "cntlid": 95, 00:14:08.927 "qid": 0, 00:14:08.927 "state": "enabled", 00:14:08.927 
"thread": "nvmf_tgt_poll_group_000", 00:14:08.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:08.927 "listen_address": { 00:14:08.927 "trtype": "TCP", 00:14:08.927 "adrfam": "IPv4", 00:14:08.927 "traddr": "10.0.0.3", 00:14:08.927 "trsvcid": "4420" 00:14:08.927 }, 00:14:08.927 "peer_address": { 00:14:08.927 "trtype": "TCP", 00:14:08.927 "adrfam": "IPv4", 00:14:08.927 "traddr": "10.0.0.1", 00:14:08.927 "trsvcid": "40700" 00:14:08.927 }, 00:14:08.927 "auth": { 00:14:08.927 "state": "completed", 00:14:08.927 "digest": "sha384", 00:14:08.927 "dhgroup": "ffdhe8192" 00:14:08.927 } 00:14:08.927 } 00:14:08.927 ]' 00:14:08.927 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.927 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:08.927 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.927 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:08.927 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:09.185 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.185 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.185 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.444 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:14:09.444 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:14:10.011 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.011 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:10.011 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.011 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.011 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.011 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:10.011 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:10.011 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.011 04:03:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:10.011 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:10.270 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:10.270 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.270 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:10.270 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:10.270 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:10.270 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.270 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.270 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.270 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.528 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.528 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.528 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.528 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.787 00:14:10.787 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.787 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.787 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.046 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.046 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.046 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.046 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.046 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.046 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.046 { 00:14:11.046 "cntlid": 97, 00:14:11.046 "qid": 0, 00:14:11.046 "state": "enabled", 00:14:11.046 "thread": "nvmf_tgt_poll_group_000", 00:14:11.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:11.046 "listen_address": { 00:14:11.046 "trtype": "TCP", 00:14:11.046 "adrfam": "IPv4", 00:14:11.046 "traddr": "10.0.0.3", 00:14:11.046 "trsvcid": "4420" 00:14:11.046 }, 00:14:11.046 "peer_address": { 00:14:11.046 "trtype": "TCP", 00:14:11.046 "adrfam": "IPv4", 00:14:11.046 "traddr": "10.0.0.1", 00:14:11.046 "trsvcid": "40716" 00:14:11.046 }, 00:14:11.046 "auth": { 00:14:11.046 "state": "completed", 00:14:11.046 "digest": "sha512", 00:14:11.046 "dhgroup": "null" 00:14:11.046 } 00:14:11.046 } 00:14:11.046 ]' 00:14:11.046 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.046 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:11.046 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.046 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:11.046 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.303 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.303 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.303 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.559 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:14:11.559 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:14:12.124 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.124 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:12.124 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.124 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.124 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:12.124 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.124 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:12.124 04:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:12.382 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:12.382 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.382 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:12.382 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:12.382 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:12.382 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.382 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.382 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.382 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.382 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.382 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.382 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.382 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.948 00:14:12.948 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.948 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.948 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.206 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.206 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.206 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.206 04:03:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.206 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.206 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.206 { 00:14:13.206 "cntlid": 99, 00:14:13.206 "qid": 0, 00:14:13.206 "state": "enabled", 00:14:13.206 "thread": "nvmf_tgt_poll_group_000", 00:14:13.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:13.206 "listen_address": { 00:14:13.206 "trtype": "TCP", 00:14:13.206 "adrfam": "IPv4", 00:14:13.206 "traddr": "10.0.0.3", 00:14:13.206 "trsvcid": "4420" 00:14:13.206 }, 00:14:13.206 "peer_address": { 00:14:13.206 "trtype": "TCP", 00:14:13.206 "adrfam": "IPv4", 00:14:13.206 "traddr": "10.0.0.1", 00:14:13.206 "trsvcid": "40750" 00:14:13.206 }, 00:14:13.206 "auth": { 00:14:13.206 "state": "completed", 00:14:13.206 "digest": "sha512", 00:14:13.206 "dhgroup": "null" 00:14:13.206 } 00:14:13.206 } 00:14:13.206 ]' 00:14:13.206 04:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.206 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:13.206 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.206 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:13.206 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.206 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.206 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.206 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.771 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:14:13.771 04:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:14:14.336 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.336 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:14.336 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.336 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.336 04:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.336 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.336 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:14.336 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:14.594 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:14.594 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.594 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:14.594 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:14.594 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:14.594 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.594 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.594 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.594 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.594 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.594 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.594 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.594 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.851 00:14:15.109 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:15.109 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.109 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.367 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.367 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.367 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.367 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.367 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.367 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.367 { 00:14:15.367 "cntlid": 101, 00:14:15.367 "qid": 0, 00:14:15.367 "state": "enabled", 00:14:15.367 "thread": "nvmf_tgt_poll_group_000", 00:14:15.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:15.367 "listen_address": { 00:14:15.367 "trtype": "TCP", 00:14:15.367 "adrfam": "IPv4", 00:14:15.367 "traddr": "10.0.0.3", 00:14:15.367 "trsvcid": "4420" 00:14:15.367 }, 00:14:15.367 "peer_address": { 00:14:15.367 "trtype": "TCP", 00:14:15.367 "adrfam": "IPv4", 00:14:15.367 "traddr": "10.0.0.1", 00:14:15.367 "trsvcid": "40774" 00:14:15.367 }, 00:14:15.367 "auth": { 00:14:15.367 "state": "completed", 00:14:15.367 "digest": "sha512", 00:14:15.367 "dhgroup": "null" 00:14:15.367 } 00:14:15.367 } 00:14:15.367 ]' 00:14:15.367 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.367 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:15.367 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.367 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:15.367 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.367 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.367 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.367 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.625 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:14:15.625 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:14:16.560 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.560 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:16.560 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.560 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:16.560 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.560 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.560 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:16.560 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:16.560 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:16.560 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.560 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:16.560 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:16.560 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:16.560 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.560 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:14:16.560 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.560 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.818 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.819 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:16.819 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:16.819 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:17.077 00:14:17.077 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:17.077 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.077 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:17.335 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.335 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.335 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:17.335 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.335 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.335 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.335 { 00:14:17.335 "cntlid": 103, 00:14:17.335 "qid": 0, 00:14:17.335 "state": "enabled", 00:14:17.335 "thread": "nvmf_tgt_poll_group_000", 00:14:17.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:17.336 "listen_address": { 00:14:17.336 "trtype": "TCP", 00:14:17.336 "adrfam": "IPv4", 00:14:17.336 "traddr": "10.0.0.3", 00:14:17.336 "trsvcid": "4420" 00:14:17.336 }, 00:14:17.336 "peer_address": { 00:14:17.336 "trtype": "TCP", 00:14:17.336 "adrfam": "IPv4", 00:14:17.336 "traddr": "10.0.0.1", 00:14:17.336 "trsvcid": "52940" 00:14:17.336 }, 00:14:17.336 "auth": { 00:14:17.336 "state": "completed", 00:14:17.336 "digest": "sha512", 00:14:17.336 "dhgroup": "null" 00:14:17.336 } 00:14:17.336 } 00:14:17.336 ]' 00:14:17.336 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.336 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:17.336 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.336 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:17.336 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.336 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.336 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.336 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.598 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:14:17.598 04:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.534 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.101 00:14:19.101 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.101 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.102 04:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.359 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.359 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.360 
04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.360 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.360 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.360 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.360 { 00:14:19.360 "cntlid": 105, 00:14:19.360 "qid": 0, 00:14:19.360 "state": "enabled", 00:14:19.360 "thread": "nvmf_tgt_poll_group_000", 00:14:19.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:19.360 "listen_address": { 00:14:19.360 "trtype": "TCP", 00:14:19.360 "adrfam": "IPv4", 00:14:19.360 "traddr": "10.0.0.3", 00:14:19.360 "trsvcid": "4420" 00:14:19.360 }, 00:14:19.360 "peer_address": { 00:14:19.360 "trtype": "TCP", 00:14:19.360 "adrfam": "IPv4", 00:14:19.360 "traddr": "10.0.0.1", 00:14:19.360 "trsvcid": "52968" 00:14:19.360 }, 00:14:19.360 "auth": { 00:14:19.360 "state": "completed", 00:14:19.360 "digest": "sha512", 00:14:19.360 "dhgroup": "ffdhe2048" 00:14:19.360 } 00:14:19.360 } 00:14:19.360 ]' 00:14:19.360 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.360 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:19.360 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.360 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:19.360 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.360 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.360 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.360 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.618 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:14:19.618 04:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:20.624 04:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.624 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.888 00:14:21.145 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.145 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.145 04:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.404 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:14:21.404 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.404 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.404 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.404 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.404 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.404 { 00:14:21.404 "cntlid": 107, 00:14:21.404 "qid": 0, 00:14:21.404 "state": "enabled", 00:14:21.404 "thread": "nvmf_tgt_poll_group_000", 00:14:21.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:21.404 "listen_address": { 00:14:21.404 "trtype": "TCP", 00:14:21.404 "adrfam": "IPv4", 00:14:21.404 "traddr": "10.0.0.3", 00:14:21.404 "trsvcid": "4420" 00:14:21.404 }, 00:14:21.404 "peer_address": { 00:14:21.404 "trtype": "TCP", 00:14:21.404 "adrfam": "IPv4", 00:14:21.404 "traddr": "10.0.0.1", 00:14:21.404 "trsvcid": "52990" 00:14:21.404 }, 00:14:21.404 "auth": { 00:14:21.404 "state": "completed", 00:14:21.404 "digest": "sha512", 00:14:21.404 "dhgroup": "ffdhe2048" 00:14:21.404 } 00:14:21.404 } 00:14:21.404 ]' 00:14:21.404 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.404 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:21.404 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.404 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:21.404 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.404 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.404 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.404 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.968 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:14:21.968 04:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:14:22.532 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.532 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:22.532 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.532 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.532 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.532 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.532 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:22.532 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:22.789 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:22.789 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.789 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:22.789 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:22.789 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:22.789 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.789 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.789 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.789 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.789 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.789 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.789 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.789 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.047 00:14:23.047 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.047 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.047 04:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:23.304 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.304 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.304 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.304 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.304 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.304 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.304 { 00:14:23.304 "cntlid": 109, 00:14:23.304 "qid": 0, 00:14:23.304 "state": "enabled", 00:14:23.304 "thread": "nvmf_tgt_poll_group_000", 00:14:23.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:23.304 "listen_address": { 00:14:23.304 "trtype": "TCP", 00:14:23.304 "adrfam": "IPv4", 00:14:23.304 "traddr": "10.0.0.3", 00:14:23.304 "trsvcid": "4420" 00:14:23.304 }, 00:14:23.304 "peer_address": { 00:14:23.304 "trtype": "TCP", 00:14:23.304 "adrfam": "IPv4", 00:14:23.304 "traddr": "10.0.0.1", 00:14:23.304 "trsvcid": "53020" 00:14:23.304 }, 00:14:23.304 "auth": { 00:14:23.304 "state": "completed", 00:14:23.304 "digest": "sha512", 00:14:23.304 "dhgroup": "ffdhe2048" 00:14:23.304 } 00:14:23.304 } 00:14:23.304 ]' 00:14:23.304 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.304 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:23.304 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.561 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:23.561 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.561 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.561 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.561 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.818 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:14:23.818 04:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:14:24.383 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.640 04:04:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:24.640 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.640 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.640 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.640 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.640 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:24.640 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:24.897 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:24.898 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:24.898 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:24.898 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:24.898 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:24.898 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.898 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:14:24.898 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.898 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.898 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.898 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:24.898 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:24.898 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:25.155 00:14:25.155 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.155 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.155 04:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.412 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.412 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.412 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.412 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.412 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.412 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.412 { 00:14:25.412 "cntlid": 111, 00:14:25.412 "qid": 0, 00:14:25.412 "state": "enabled", 00:14:25.412 "thread": "nvmf_tgt_poll_group_000", 00:14:25.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:25.412 "listen_address": { 00:14:25.412 "trtype": "TCP", 00:14:25.412 "adrfam": "IPv4", 00:14:25.412 "traddr": "10.0.0.3", 00:14:25.412 "trsvcid": "4420" 00:14:25.412 }, 00:14:25.412 "peer_address": { 00:14:25.412 "trtype": "TCP", 00:14:25.412 "adrfam": "IPv4", 00:14:25.412 "traddr": "10.0.0.1", 00:14:25.412 "trsvcid": "53052" 00:14:25.412 }, 00:14:25.412 "auth": { 00:14:25.412 "state": "completed", 00:14:25.412 "digest": "sha512", 00:14:25.412 "dhgroup": "ffdhe2048" 00:14:25.412 } 00:14:25.412 } 00:14:25.412 ]' 00:14:25.412 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:25.670 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:25.670 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:25.670 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:25.670 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:25.670 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.670 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.670 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.928 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:14:25.928 04:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.862 04:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.428 00:14:27.428 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.428 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.428 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.687 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.687 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.687 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.687 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.687 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.687 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.687 { 00:14:27.687 "cntlid": 113, 00:14:27.687 "qid": 0, 00:14:27.687 "state": "enabled", 00:14:27.687 "thread": "nvmf_tgt_poll_group_000", 00:14:27.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:27.687 "listen_address": { 00:14:27.687 "trtype": "TCP", 00:14:27.687 "adrfam": "IPv4", 00:14:27.687 "traddr": "10.0.0.3", 00:14:27.687 "trsvcid": "4420" 00:14:27.687 }, 00:14:27.687 "peer_address": { 00:14:27.687 "trtype": "TCP", 00:14:27.687 "adrfam": "IPv4", 00:14:27.687 "traddr": "10.0.0.1", 00:14:27.687 "trsvcid": "46018" 00:14:27.687 }, 00:14:27.687 "auth": { 00:14:27.687 "state": "completed", 00:14:27.687 "digest": "sha512", 00:14:27.687 "dhgroup": "ffdhe3072" 00:14:27.687 } 00:14:27.687 } 00:14:27.687 ]' 00:14:27.687 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.687 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:27.687 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.687 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:27.687 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.687 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.687 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.687 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.254 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:14:28.254 04:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret 
DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:14:28.886 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.886 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:28.886 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.886 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.886 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.886 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.886 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:28.886 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:29.144 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:29.144 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.144 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:29.144 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:29.144 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:29.144 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.144 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.144 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.144 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.144 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.144 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.144 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.144 04:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.402 00:14:29.403 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.403 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.403 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.660 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.660 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.660 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.660 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.919 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.919 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.919 { 00:14:29.919 "cntlid": 115, 00:14:29.919 "qid": 0, 00:14:29.919 "state": "enabled", 00:14:29.919 "thread": "nvmf_tgt_poll_group_000", 00:14:29.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:29.919 "listen_address": { 00:14:29.919 "trtype": "TCP", 00:14:29.919 "adrfam": "IPv4", 00:14:29.919 "traddr": "10.0.0.3", 00:14:29.919 "trsvcid": "4420" 00:14:29.919 }, 00:14:29.919 "peer_address": { 00:14:29.919 "trtype": "TCP", 00:14:29.919 "adrfam": "IPv4", 00:14:29.919 "traddr": "10.0.0.1", 00:14:29.919 "trsvcid": "46036" 00:14:29.919 }, 00:14:29.919 "auth": { 00:14:29.919 "state": "completed", 00:14:29.919 "digest": "sha512", 00:14:29.919 "dhgroup": "ffdhe3072" 00:14:29.919 } 00:14:29.919 } 00:14:29.919 ]' 00:14:29.919 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.919 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:29.919 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.919 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:29.919 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.919 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.919 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.919 04:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.177 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:14:30.177 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 
9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:14:30.742 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.742 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:30.742 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.742 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.742 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.742 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.742 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:30.742 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:31.000 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:31.000 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.000 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:31.000 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:31.000 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:31.000 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.000 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.000 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.000 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.000 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.000 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.000 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.000 04:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.565 00:14:31.565 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.565 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.565 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.823 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.823 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.823 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.823 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.823 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.823 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.823 { 00:14:31.823 "cntlid": 117, 00:14:31.823 "qid": 0, 00:14:31.823 "state": "enabled", 00:14:31.823 "thread": "nvmf_tgt_poll_group_000", 00:14:31.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:31.823 "listen_address": { 00:14:31.823 "trtype": "TCP", 00:14:31.823 "adrfam": "IPv4", 00:14:31.823 "traddr": "10.0.0.3", 00:14:31.823 "trsvcid": "4420" 00:14:31.823 }, 00:14:31.823 "peer_address": { 00:14:31.823 "trtype": "TCP", 00:14:31.823 "adrfam": "IPv4", 00:14:31.823 "traddr": "10.0.0.1", 00:14:31.823 "trsvcid": "46052" 00:14:31.823 }, 00:14:31.823 "auth": { 00:14:31.823 "state": "completed", 00:14:31.823 "digest": "sha512", 00:14:31.823 "dhgroup": "ffdhe3072" 00:14:31.823 } 00:14:31.823 } 00:14:31.823 ]' 00:14:31.823 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.823 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:31.823 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.823 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:31.823 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.823 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.823 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.823 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.081 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:14:32.081 04:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:33.013 04:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:33.584 00:14:33.584 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:33.584 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.584 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.841 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.841 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.841 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.841 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.841 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.841 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.841 { 00:14:33.841 "cntlid": 119, 00:14:33.841 "qid": 0, 00:14:33.841 "state": "enabled", 00:14:33.841 "thread": "nvmf_tgt_poll_group_000", 00:14:33.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:33.841 "listen_address": { 00:14:33.841 "trtype": "TCP", 00:14:33.841 "adrfam": "IPv4", 00:14:33.841 "traddr": "10.0.0.3", 00:14:33.841 "trsvcid": "4420" 00:14:33.841 }, 00:14:33.841 "peer_address": { 00:14:33.841 "trtype": "TCP", 00:14:33.841 "adrfam": "IPv4", 00:14:33.841 "traddr": "10.0.0.1", 00:14:33.841 "trsvcid": "46088" 00:14:33.841 }, 00:14:33.841 "auth": { 00:14:33.841 "state": "completed", 00:14:33.841 "digest": "sha512", 00:14:33.841 "dhgroup": "ffdhe3072" 00:14:33.841 } 00:14:33.841 } 00:14:33.841 ]' 00:14:33.841 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.841 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:33.841 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.841 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:33.841 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.841 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.841 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.841 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.098 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:14:34.098 04:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:14:35.041 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.041 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:35.041 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.041 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.041 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.041 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:35.041 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.041 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:35.041 04:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:35.298 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:14:35.298 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.298 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:35.298 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:35.298 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:35.298 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.298 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.298 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.298 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.298 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.298 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.298 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.298 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.560 00:14:35.560 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.560 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.560 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.835 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.835 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.835 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.835 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.835 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.835 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.835 { 00:14:35.835 "cntlid": 121, 00:14:35.835 "qid": 0, 00:14:35.835 "state": "enabled", 00:14:35.835 "thread": "nvmf_tgt_poll_group_000", 00:14:35.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:35.835 "listen_address": { 00:14:35.835 "trtype": "TCP", 00:14:35.835 "adrfam": "IPv4", 00:14:35.835 "traddr": "10.0.0.3", 00:14:35.835 "trsvcid": "4420" 00:14:35.835 }, 00:14:35.835 "peer_address": { 00:14:35.835 "trtype": "TCP", 00:14:35.835 "adrfam": "IPv4", 00:14:35.835 "traddr": "10.0.0.1", 00:14:35.835 "trsvcid": "46124" 00:14:35.835 }, 00:14:35.835 "auth": { 00:14:35.835 "state": "completed", 00:14:35.835 "digest": "sha512", 00:14:35.835 "dhgroup": "ffdhe4096" 00:14:35.835 } 00:14:35.835 } 00:14:35.835 ]' 00:14:35.835 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.835 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:35.835 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.093 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:36.093 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.093 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.093 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.093 04:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.352 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret 
DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:14:36.352 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:14:36.919 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.177 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:37.177 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.177 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.177 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.177 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.177 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:37.177 04:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:37.436 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:14:37.436 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.436 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:37.436 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:37.436 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:37.436 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.436 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.436 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.436 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.436 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.436 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.436 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.436 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.695 00:14:37.695 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.695 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.695 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.955 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.955 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.955 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.955 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.955 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.955 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.955 { 00:14:37.955 "cntlid": 123, 00:14:37.955 "qid": 0, 00:14:37.955 "state": "enabled", 00:14:37.955 "thread": "nvmf_tgt_poll_group_000", 00:14:37.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:37.955 "listen_address": { 00:14:37.955 "trtype": "TCP", 00:14:37.955 "adrfam": "IPv4", 00:14:37.955 "traddr": "10.0.0.3", 00:14:37.955 "trsvcid": "4420" 00:14:37.955 }, 00:14:37.955 "peer_address": { 00:14:37.955 "trtype": "TCP", 00:14:37.955 "adrfam": "IPv4", 00:14:37.955 "traddr": "10.0.0.1", 00:14:37.955 "trsvcid": "43468" 00:14:37.955 }, 00:14:37.955 "auth": { 00:14:37.955 "state": "completed", 00:14:37.955 "digest": "sha512", 00:14:37.955 "dhgroup": "ffdhe4096" 00:14:37.955 } 00:14:37.955 } 00:14:37.955 ]' 00:14:37.955 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.955 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:37.955 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.213 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:38.214 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.214 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.214 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.214 04:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.471 04:04:20 
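The pass traced above is one iteration of the script's connect_authenticate helper: the target subsystem is told which DH-HMAC-CHAP key (and, when present, controller key) the host must use, the SPDK host side attaches a controller over the /var/tmp/host.sock RPC socket shown in the trace, and the resulting qpair is then inspected. Condensed into a minimal sketch that reuses only commands and values visible in this trace, and that assumes the key material key1/ckey1 was loaded earlier in the run:

  # One sha512/ffdhe4096 pass with key slot 1 (sketch, not the full helper).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc

  # Host side: pin the digest and DH group the initiator may negotiate.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # Target side (default RPC socket): require this host to present key1, and ckey1 for bidirectional auth.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach a controller, authenticating with the same key pair.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1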
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:14:38.471 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:14:39.035 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.035 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:39.035 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.035 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.293 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.293 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.293 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:39.293 04:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:39.551 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:14:39.551 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.551 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:39.551 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:39.551 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:39.551 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.551 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.551 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.551 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.551 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.551 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.551 04:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.551 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.810 00:14:39.810 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.810 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.810 04:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.376 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.376 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.376 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.376 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.376 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.376 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.376 { 00:14:40.376 "cntlid": 125, 00:14:40.376 "qid": 0, 00:14:40.376 "state": "enabled", 00:14:40.376 "thread": "nvmf_tgt_poll_group_000", 00:14:40.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:40.376 "listen_address": { 00:14:40.376 "trtype": "TCP", 00:14:40.376 "adrfam": "IPv4", 00:14:40.376 "traddr": "10.0.0.3", 00:14:40.376 "trsvcid": "4420" 00:14:40.376 }, 00:14:40.376 "peer_address": { 00:14:40.376 "trtype": "TCP", 00:14:40.376 "adrfam": "IPv4", 00:14:40.376 "traddr": "10.0.0.1", 00:14:40.376 "trsvcid": "43502" 00:14:40.376 }, 00:14:40.376 "auth": { 00:14:40.376 "state": "completed", 00:14:40.376 "digest": "sha512", 00:14:40.376 "dhgroup": "ffdhe4096" 00:14:40.376 } 00:14:40.376 } 00:14:40.376 ]' 00:14:40.376 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.376 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:40.376 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.376 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:40.376 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.376 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.376 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.376 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.635 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:14:40.635 04:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:14:41.200 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.200 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:41.200 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.200 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.200 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.200 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.200 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:41.200 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:41.457 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:14:41.457 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.457 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:41.457 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:41.457 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:41.457 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.457 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:14:41.458 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.458 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.458 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.458 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:14:41.458 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:41.458 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:42.021 00:14:42.021 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.021 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.021 04:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.311 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.311 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.311 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.311 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.311 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.311 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.311 { 00:14:42.311 "cntlid": 127, 00:14:42.311 "qid": 0, 00:14:42.311 "state": "enabled", 00:14:42.311 "thread": "nvmf_tgt_poll_group_000", 00:14:42.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:42.311 "listen_address": { 00:14:42.311 "trtype": "TCP", 00:14:42.311 "adrfam": "IPv4", 00:14:42.311 "traddr": "10.0.0.3", 00:14:42.311 "trsvcid": "4420" 00:14:42.311 }, 00:14:42.311 "peer_address": { 00:14:42.311 "trtype": "TCP", 00:14:42.311 "adrfam": "IPv4", 00:14:42.311 "traddr": "10.0.0.1", 00:14:42.311 "trsvcid": "43522" 00:14:42.311 }, 00:14:42.311 "auth": { 00:14:42.311 "state": "completed", 00:14:42.311 "digest": "sha512", 00:14:42.311 "dhgroup": "ffdhe4096" 00:14:42.311 } 00:14:42.311 } 00:14:42.311 ]' 00:14:42.311 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.311 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.311 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.311 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:42.311 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.568 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.568 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.568 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.825 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:14:42.825 04:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.756 04:04:25 
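Alongside the RPC-driven attach, each key is also exercised through the kernel initiator: nvme connect receives the DH-HMAC-CHAP secret strings inline, as in the entries just above. Roughly, with the same flags the trace uses (the DHHC-1 strings are the throwaway secrets printed in the trace, and --dhchap-ctrl-secret is passed only for slots that have a controller secret, which is why the key3 pass above omits it):

  # Kernel-initiator check (sketch). Substitute the DHHC-1 strings from the trace.
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc \
      --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc \
      --dhchap-secret 'DHHC-1:03:...'   # add --dhchap-ctrl-secret '...' when one exists
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0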
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.756 04:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.320 00:14:44.320 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.320 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.320 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.576 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.576 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.576 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.577 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.577 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.577 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.577 { 00:14:44.577 "cntlid": 129, 00:14:44.577 "qid": 0, 00:14:44.577 "state": "enabled", 00:14:44.577 "thread": "nvmf_tgt_poll_group_000", 00:14:44.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:44.577 "listen_address": { 00:14:44.577 "trtype": "TCP", 00:14:44.577 "adrfam": "IPv4", 00:14:44.577 "traddr": "10.0.0.3", 00:14:44.577 "trsvcid": "4420" 00:14:44.577 }, 00:14:44.577 "peer_address": { 00:14:44.577 "trtype": "TCP", 00:14:44.577 "adrfam": "IPv4", 00:14:44.577 "traddr": "10.0.0.1", 00:14:44.577 "trsvcid": "43556" 00:14:44.577 }, 00:14:44.577 "auth": { 00:14:44.577 "state": "completed", 00:14:44.577 "digest": "sha512", 00:14:44.577 "dhgroup": "ffdhe6144" 00:14:44.577 } 00:14:44.577 } 00:14:44.577 ]' 00:14:44.577 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.577 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.577 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.833 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:44.833 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.833 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.833 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.833 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.204 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:14:45.204 04:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:14:45.770 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.770 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:45.770 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.770 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.770 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.770 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.770 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:45.770 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:46.028 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:14:46.028 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.028 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:46.028 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:46.028 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:46.028 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.028 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.028 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.028 04:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.028 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.028 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.028 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.028 04:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.594 00:14:46.594 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.594 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.594 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.852 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.852 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.852 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.852 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.852 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.852 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.852 { 00:14:46.852 "cntlid": 131, 00:14:46.852 "qid": 0, 00:14:46.852 "state": "enabled", 00:14:46.852 "thread": "nvmf_tgt_poll_group_000", 00:14:46.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:46.852 "listen_address": { 00:14:46.852 "trtype": "TCP", 00:14:46.852 "adrfam": "IPv4", 00:14:46.852 "traddr": "10.0.0.3", 00:14:46.852 "trsvcid": "4420" 00:14:46.852 }, 00:14:46.852 "peer_address": { 00:14:46.852 "trtype": "TCP", 00:14:46.852 "adrfam": "IPv4", 00:14:46.852 "traddr": "10.0.0.1", 00:14:46.852 "trsvcid": "43584" 00:14:46.852 }, 00:14:46.852 "auth": { 00:14:46.852 "state": "completed", 00:14:46.852 "digest": "sha512", 00:14:46.852 "dhgroup": "ffdhe6144" 00:14:46.852 } 00:14:46.852 } 00:14:46.852 ]' 00:14:46.852 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.852 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:46.852 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.110 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:47.111 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:14:47.111 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.111 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.111 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.370 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:14:47.370 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:14:47.938 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.938 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:47.938 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.938 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.938 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.938 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.938 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:47.938 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:48.504 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:14:48.504 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.504 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:48.504 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:48.504 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:48.504 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.504 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.504 04:04:30 
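Every attach is then verified from both ends: bdev_nvme_get_controllers confirms the host-side controller exists, and nvmf_subsystem_get_qpairs returns the JSON arrays quoted throughout this section, whose auth object is asserted on. In essence (sketch; the jq filters are the ones in the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Host side: the attached controller should be the one we named.
  [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # Target side: the new qpair should have completed DH-HMAC-CHAP with the expected parameters.
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Detach before the next key slot is tried.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0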
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.504 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.504 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.504 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.504 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.504 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.762 00:14:48.762 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.762 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.762 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.020 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.020 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.020 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.020 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.020 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.020 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.020 { 00:14:49.020 "cntlid": 133, 00:14:49.020 "qid": 0, 00:14:49.021 "state": "enabled", 00:14:49.021 "thread": "nvmf_tgt_poll_group_000", 00:14:49.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:49.021 "listen_address": { 00:14:49.021 "trtype": "TCP", 00:14:49.021 "adrfam": "IPv4", 00:14:49.021 "traddr": "10.0.0.3", 00:14:49.021 "trsvcid": "4420" 00:14:49.021 }, 00:14:49.021 "peer_address": { 00:14:49.021 "trtype": "TCP", 00:14:49.021 "adrfam": "IPv4", 00:14:49.021 "traddr": "10.0.0.1", 00:14:49.021 "trsvcid": "56952" 00:14:49.021 }, 00:14:49.021 "auth": { 00:14:49.021 "state": "completed", 00:14:49.021 "digest": "sha512", 00:14:49.021 "dhgroup": "ffdhe6144" 00:14:49.021 } 00:14:49.021 } 00:14:49.021 ]' 00:14:49.021 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.280 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:49.280 04:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.280 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:14:49.280 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.280 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.280 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.280 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.548 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:14:49.548 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:14:50.115 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.115 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:50.115 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.115 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.115 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.115 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.115 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:50.115 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:50.373 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:14:50.373 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.373 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:50.373 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:50.373 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:50.373 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.373 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:14:50.373 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.373 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.632 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.632 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:50.632 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:50.632 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:50.889 00:14:50.889 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.889 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.889 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.456 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.456 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.456 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.456 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.456 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.456 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.456 { 00:14:51.456 "cntlid": 135, 00:14:51.456 "qid": 0, 00:14:51.456 "state": "enabled", 00:14:51.456 "thread": "nvmf_tgt_poll_group_000", 00:14:51.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:51.456 "listen_address": { 00:14:51.456 "trtype": "TCP", 00:14:51.456 "adrfam": "IPv4", 00:14:51.456 "traddr": "10.0.0.3", 00:14:51.456 "trsvcid": "4420" 00:14:51.456 }, 00:14:51.456 "peer_address": { 00:14:51.456 "trtype": "TCP", 00:14:51.456 "adrfam": "IPv4", 00:14:51.456 "traddr": "10.0.0.1", 00:14:51.456 "trsvcid": "56992" 00:14:51.456 }, 00:14:51.456 "auth": { 00:14:51.456 "state": "completed", 00:14:51.456 "digest": "sha512", 00:14:51.456 "dhgroup": "ffdhe6144" 00:14:51.456 } 00:14:51.456 } 00:14:51.456 ]' 00:14:51.456 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.456 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:51.456 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.456 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:51.456 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.456 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.456 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.456 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.761 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:14:51.761 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:14:52.696 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.697 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.262 00:14:53.262 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.262 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.262 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.835 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.835 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.835 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.836 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.836 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.836 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.836 { 00:14:53.836 "cntlid": 137, 00:14:53.836 "qid": 0, 00:14:53.836 "state": "enabled", 00:14:53.836 "thread": "nvmf_tgt_poll_group_000", 00:14:53.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:53.836 "listen_address": { 00:14:53.836 "trtype": "TCP", 00:14:53.836 "adrfam": "IPv4", 00:14:53.836 "traddr": "10.0.0.3", 00:14:53.836 "trsvcid": "4420" 00:14:53.836 }, 00:14:53.836 "peer_address": { 00:14:53.836 "trtype": "TCP", 00:14:53.836 "adrfam": "IPv4", 00:14:53.836 "traddr": "10.0.0.1", 00:14:53.836 "trsvcid": "57026" 00:14:53.836 }, 00:14:53.836 "auth": { 00:14:53.836 "state": "completed", 00:14:53.836 "digest": "sha512", 00:14:53.836 "dhgroup": "ffdhe8192" 00:14:53.836 } 00:14:53.836 } 00:14:53.836 ]' 00:14:53.836 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.836 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:53.836 04:04:35 
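One detail that explains the asymmetry between key slots: key3 is added with --dhchap-key only, while slots 0 through 2 also carry --dhchap-ctrlr-key. That comes from the ckey expansion visible in the trace, ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), which produces the extra flag pair only when a controller secret exists for that slot. A small standalone illustration of the idiom (the key names are placeholders):

  # ':+' expands to the flag pair only when ckeys[i] is set and non-empty,
  # so slots without a controller secret simply skip bidirectional auth.
  ckeys=(ckey0 ckey1 ckey2)          # nothing was generated for slot 3
  for i in 0 1 2 3; do
      ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
      echo "slot $i:" --dhchap-key "key$i" "${ckey[@]}"
  done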
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.836 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:53.836 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.836 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.836 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.836 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.095 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:14:54.095 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:14:55.031 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.031 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:55.031 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.031 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.031 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.031 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.031 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:55.031 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:55.291 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:14:55.291 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.291 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:55.291 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:55.291 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:55.291 04:04:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.291 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.291 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.291 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.291 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.291 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.291 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.291 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.859 00:14:55.859 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.859 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.859 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.118 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.118 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.118 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.118 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.118 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.118 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.118 { 00:14:56.118 "cntlid": 139, 00:14:56.118 "qid": 0, 00:14:56.118 "state": "enabled", 00:14:56.118 "thread": "nvmf_tgt_poll_group_000", 00:14:56.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:56.118 "listen_address": { 00:14:56.118 "trtype": "TCP", 00:14:56.118 "adrfam": "IPv4", 00:14:56.118 "traddr": "10.0.0.3", 00:14:56.118 "trsvcid": "4420" 00:14:56.118 }, 00:14:56.118 "peer_address": { 00:14:56.118 "trtype": "TCP", 00:14:56.118 "adrfam": "IPv4", 00:14:56.118 "traddr": "10.0.0.1", 00:14:56.118 "trsvcid": "57062" 00:14:56.118 }, 00:14:56.118 "auth": { 00:14:56.118 "state": "completed", 00:14:56.118 "digest": "sha512", 00:14:56.118 "dhgroup": "ffdhe8192" 00:14:56.118 } 00:14:56.118 } 00:14:56.118 ]' 00:14:56.118 04:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.118 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:56.118 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.118 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:56.118 04:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.118 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.118 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.118 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.376 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:14:56.376 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: --dhchap-ctrl-secret DHHC-1:02:ZmIzYzEyNTE0Yzg5YTBmMDJmZjBmMTM3ODFlNzJiYTgxODMwOTg2ZjFiNTZiNmI1MDQ07Q==: 00:14:57.313 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.313 04:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:57.313 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.313 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.313 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.313 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.313 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:57.313 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:57.574 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:14:57.574 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.574 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:57.574 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:14:57.574 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:57.574 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.574 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.574 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.574 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.574 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.574 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.574 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.574 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.141 00:14:58.141 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.141 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.141 04:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.400 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.400 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.400 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.400 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.400 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.400 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.400 { 00:14:58.400 "cntlid": 141, 00:14:58.400 "qid": 0, 00:14:58.400 "state": "enabled", 00:14:58.400 "thread": "nvmf_tgt_poll_group_000", 00:14:58.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:14:58.400 "listen_address": { 00:14:58.400 "trtype": "TCP", 00:14:58.400 "adrfam": "IPv4", 00:14:58.400 "traddr": "10.0.0.3", 00:14:58.400 "trsvcid": "4420" 00:14:58.400 }, 00:14:58.400 "peer_address": { 00:14:58.400 "trtype": "TCP", 00:14:58.400 "adrfam": "IPv4", 00:14:58.400 "traddr": "10.0.0.1", 00:14:58.400 "trsvcid": "55798" 00:14:58.400 }, 00:14:58.400 "auth": { 00:14:58.400 "state": "completed", 00:14:58.400 "digest": 
"sha512", 00:14:58.400 "dhgroup": "ffdhe8192" 00:14:58.400 } 00:14:58.400 } 00:14:58.400 ]' 00:14:58.400 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.400 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:58.400 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.400 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:58.400 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.400 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.400 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.400 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.658 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:14:58.658 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:01:NDVkZDk1OWRjOTJmYmM3ZjM5NWI5N2I2ZmIxNjcyOTmus8P9: 00:14:59.592 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.592 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:14:59.592 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.592 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.592 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.593 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.593 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:59.593 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:59.593 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:14:59.593 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.593 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:14:59.593 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:59.593 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:59.593 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.593 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:14:59.593 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.593 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.593 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.593 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:59.593 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.593 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:00.160 00:15:00.417 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.417 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.417 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.674 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.674 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.674 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.674 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.674 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.674 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.674 { 00:15:00.674 "cntlid": 143, 00:15:00.674 "qid": 0, 00:15:00.674 "state": "enabled", 00:15:00.674 "thread": "nvmf_tgt_poll_group_000", 00:15:00.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:15:00.674 "listen_address": { 00:15:00.674 "trtype": "TCP", 00:15:00.674 "adrfam": "IPv4", 00:15:00.674 "traddr": "10.0.0.3", 00:15:00.674 "trsvcid": "4420" 00:15:00.674 }, 00:15:00.674 "peer_address": { 00:15:00.674 "trtype": "TCP", 00:15:00.674 "adrfam": "IPv4", 00:15:00.674 "traddr": "10.0.0.1", 00:15:00.674 "trsvcid": "55824" 00:15:00.674 }, 00:15:00.674 "auth": { 00:15:00.674 "state": "completed", 00:15:00.674 
"digest": "sha512", 00:15:00.674 "dhgroup": "ffdhe8192" 00:15:00.674 } 00:15:00.674 } 00:15:00.674 ]' 00:15:00.674 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.674 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:00.674 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.674 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:00.674 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.674 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.674 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.674 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.932 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:15:00.932 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:15:01.864 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.865 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.796 00:15:02.796 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.796 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.796 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.796 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.796 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.796 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.796 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.053 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.053 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.053 { 00:15:03.053 "cntlid": 145, 00:15:03.053 "qid": 0, 00:15:03.053 "state": "enabled", 00:15:03.053 "thread": "nvmf_tgt_poll_group_000", 00:15:03.053 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:15:03.053 "listen_address": { 00:15:03.053 "trtype": "TCP", 00:15:03.053 "adrfam": "IPv4", 00:15:03.053 "traddr": "10.0.0.3", 00:15:03.053 "trsvcid": "4420" 00:15:03.053 }, 00:15:03.053 "peer_address": { 00:15:03.053 "trtype": "TCP", 00:15:03.053 "adrfam": "IPv4", 00:15:03.053 "traddr": "10.0.0.1", 00:15:03.053 "trsvcid": "55854" 00:15:03.053 }, 00:15:03.053 "auth": { 00:15:03.053 "state": "completed", 00:15:03.053 "digest": "sha512", 00:15:03.053 "dhgroup": "ffdhe8192" 00:15:03.053 } 00:15:03.053 } 00:15:03.053 ]' 00:15:03.053 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.053 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:03.053 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.053 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:03.053 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.053 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.053 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.053 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.317 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:15:03.317 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:00:ODkzYzFhYmRkYmI5OWEyNGZlNGIyZTVlNzkxNjBlMDJmN2M1Yjg1NGYyYmU3MGIw2kZeLg==: --dhchap-ctrl-secret DHHC-1:03:MjYzNjBlZWVhY2JiOTIzOGYyMGZkNDI4YzQ2ZjMwNmFlMThlZGUwODlhZTczYzNiNjVhZWEzZmQyYTcyOWNlMH4dAuI=: 00:15:04.254 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.254 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:15:04.254 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.254 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.254 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.254 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 00:15:04.254 04:04:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.254 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.254 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.254 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:04.254 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:04.254 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:04.254 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:04.254 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:04.254 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:04.254 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:04.254 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:04.254 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:04.254 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:04.821 request: 00:15:04.821 { 00:15:04.822 "name": "nvme0", 00:15:04.822 "trtype": "tcp", 00:15:04.822 "traddr": "10.0.0.3", 00:15:04.822 "adrfam": "ipv4", 00:15:04.822 "trsvcid": "4420", 00:15:04.822 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:04.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:15:04.822 "prchk_reftag": false, 00:15:04.822 "prchk_guard": false, 00:15:04.822 "hdgst": false, 00:15:04.822 "ddgst": false, 00:15:04.822 "dhchap_key": "key2", 00:15:04.822 "allow_unrecognized_csi": false, 00:15:04.822 "method": "bdev_nvme_attach_controller", 00:15:04.822 "req_id": 1 00:15:04.822 } 00:15:04.822 Got JSON-RPC error response 00:15:04.822 response: 00:15:04.822 { 00:15:04.822 "code": -5, 00:15:04.822 "message": "Input/output error" 00:15:04.822 } 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:15:04.822 
04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:04.822 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:05.389 request: 00:15:05.389 { 00:15:05.389 "name": "nvme0", 00:15:05.389 "trtype": "tcp", 00:15:05.389 "traddr": "10.0.0.3", 00:15:05.389 "adrfam": "ipv4", 00:15:05.389 "trsvcid": "4420", 00:15:05.389 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:05.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:15:05.389 "prchk_reftag": false, 00:15:05.389 "prchk_guard": false, 00:15:05.389 "hdgst": false, 00:15:05.389 "ddgst": false, 00:15:05.389 "dhchap_key": "key1", 00:15:05.389 "dhchap_ctrlr_key": "ckey2", 00:15:05.389 "allow_unrecognized_csi": false, 00:15:05.389 "method": "bdev_nvme_attach_controller", 00:15:05.389 "req_id": 1 00:15:05.389 } 00:15:05.389 Got JSON-RPC error response 00:15:05.389 response: 00:15:05.389 { 
00:15:05.389 "code": -5, 00:15:05.389 "message": "Input/output error" 00:15:05.389 } 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.389 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.957 
request: 00:15:05.957 { 00:15:05.957 "name": "nvme0", 00:15:05.957 "trtype": "tcp", 00:15:05.957 "traddr": "10.0.0.3", 00:15:05.957 "adrfam": "ipv4", 00:15:05.957 "trsvcid": "4420", 00:15:05.957 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:05.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:15:05.957 "prchk_reftag": false, 00:15:05.957 "prchk_guard": false, 00:15:05.957 "hdgst": false, 00:15:05.957 "ddgst": false, 00:15:05.957 "dhchap_key": "key1", 00:15:05.957 "dhchap_ctrlr_key": "ckey1", 00:15:05.957 "allow_unrecognized_csi": false, 00:15:05.957 "method": "bdev_nvme_attach_controller", 00:15:05.957 "req_id": 1 00:15:05.957 } 00:15:05.957 Got JSON-RPC error response 00:15:05.957 response: 00:15:05.957 { 00:15:05.957 "code": -5, 00:15:05.957 "message": "Input/output error" 00:15:05.957 } 00:15:05.957 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:05.958 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:05.958 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:05.958 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:05.958 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:15:05.958 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.958 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.958 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.958 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67685 00:15:05.958 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67685 ']' 00:15:05.958 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67685 00:15:05.958 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:05.958 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.958 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67685 00:15:05.958 killing process with pid 67685 00:15:05.958 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:05.958 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:05.958 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67685' 00:15:05.958 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67685 00:15:05.958 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67685 00:15:06.217 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:06.217 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:06.217 04:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:06.217 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.217 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70768 00:15:06.217 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:06.217 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70768 00:15:06.217 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70768 ']' 00:15:06.217 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.217 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.217 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.217 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.217 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.475 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.475 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:06.475 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:06.475 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:06.475 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.475 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.475 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:06.475 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70768 00:15:06.475 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70768 ']' 00:15:06.475 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.475 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.475 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:06.475 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.475 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.733 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.733 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:06.733 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:06.733 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.733 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.991 null0 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ekf 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.0bi ]] 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0bi 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.hjh 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.aBV ]] 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aBV 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:06.991 04:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Pis 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.zgQ ]] 00:15:06.991 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zgQ 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.wzh 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:15:06.992 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.367 nvme0n1 00:15:08.367 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.367 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.367 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.367 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.367 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.367 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.367 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.367 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.367 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.367 { 00:15:08.367 "cntlid": 1, 00:15:08.367 "qid": 0, 00:15:08.367 "state": "enabled", 00:15:08.367 "thread": "nvmf_tgt_poll_group_000", 00:15:08.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:15:08.368 "listen_address": { 00:15:08.368 "trtype": "TCP", 00:15:08.368 "adrfam": "IPv4", 00:15:08.368 "traddr": "10.0.0.3", 00:15:08.368 "trsvcid": "4420" 00:15:08.368 }, 00:15:08.368 "peer_address": { 00:15:08.368 "trtype": "TCP", 00:15:08.368 "adrfam": "IPv4", 00:15:08.368 "traddr": "10.0.0.1", 00:15:08.368 "trsvcid": "34356" 00:15:08.368 }, 00:15:08.368 "auth": { 00:15:08.368 "state": "completed", 00:15:08.368 "digest": "sha512", 00:15:08.368 "dhgroup": "ffdhe8192" 00:15:08.368 } 00:15:08.368 } 00:15:08.368 ]' 00:15:08.368 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.626 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:08.626 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.626 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:08.626 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.626 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.626 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.626 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.884 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:15:08.884 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:15:09.819 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.819 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:15:09.819 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.819 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.819 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.819 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key3 00:15:09.819 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.819 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.819 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.819 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:09.819 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:10.077 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:10.077 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:10.077 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:10.077 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:10.077 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:10.077 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:10.077 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:10.077 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:10.077 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.077 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.335 request: 00:15:10.335 { 00:15:10.335 "name": "nvme0", 00:15:10.335 "trtype": "tcp", 00:15:10.335 "traddr": "10.0.0.3", 00:15:10.335 "adrfam": "ipv4", 00:15:10.335 "trsvcid": "4420", 00:15:10.335 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:10.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:15:10.335 "prchk_reftag": false, 00:15:10.335 "prchk_guard": false, 00:15:10.335 "hdgst": false, 00:15:10.335 "ddgst": false, 00:15:10.335 "dhchap_key": "key3", 00:15:10.335 "allow_unrecognized_csi": false, 00:15:10.335 "method": "bdev_nvme_attach_controller", 00:15:10.335 "req_id": 1 00:15:10.335 } 00:15:10.335 Got JSON-RPC error response 00:15:10.335 response: 00:15:10.335 { 00:15:10.335 "code": -5, 00:15:10.335 "message": "Input/output error" 00:15:10.336 } 00:15:10.336 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:10.336 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:10.336 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:10.336 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:10.336 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:10.336 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:10.336 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:10.336 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:10.594 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:10.594 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:10.594 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:10.594 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:10.594 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:10.594 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:10.594 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:10.594 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:10.594 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.594 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.161 request: 00:15:11.161 { 00:15:11.161 "name": "nvme0", 00:15:11.161 "trtype": "tcp", 00:15:11.161 "traddr": "10.0.0.3", 00:15:11.161 "adrfam": "ipv4", 00:15:11.161 "trsvcid": "4420", 00:15:11.161 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:11.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:15:11.161 "prchk_reftag": false, 00:15:11.161 "prchk_guard": false, 00:15:11.161 "hdgst": false, 00:15:11.161 "ddgst": false, 00:15:11.161 "dhchap_key": "key3", 00:15:11.161 "allow_unrecognized_csi": false, 00:15:11.161 "method": "bdev_nvme_attach_controller", 00:15:11.161 "req_id": 1 00:15:11.161 } 00:15:11.161 Got JSON-RPC error response 00:15:11.161 response: 00:15:11.161 { 00:15:11.161 "code": -5, 00:15:11.161 "message": "Input/output error" 00:15:11.161 } 00:15:11.161 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:11.161 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:11.161 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:11.161 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:11.161 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:11.161 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:11.161 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:11.161 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:11.161 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:11.161 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:11.419 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:15:11.419 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.419 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.419 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.419 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:15:11.419 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.419 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.419 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.419 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:11.420 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:11.420 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:11.420 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:11.420 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:11.420 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:11.420 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:11.420 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:11.420 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:11.420 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:11.985 request: 00:15:11.985 { 00:15:11.985 "name": "nvme0", 00:15:11.985 "trtype": "tcp", 00:15:11.985 "traddr": "10.0.0.3", 00:15:11.985 "adrfam": "ipv4", 00:15:11.985 "trsvcid": "4420", 00:15:11.985 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:11.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:15:11.985 "prchk_reftag": false, 00:15:11.985 "prchk_guard": false, 00:15:11.985 "hdgst": false, 00:15:11.985 "ddgst": false, 00:15:11.985 "dhchap_key": "key0", 00:15:11.985 "dhchap_ctrlr_key": "key1", 00:15:11.985 "allow_unrecognized_csi": false, 00:15:11.985 "method": "bdev_nvme_attach_controller", 00:15:11.985 "req_id": 1 00:15:11.985 } 00:15:11.985 Got JSON-RPC error response 00:15:11.985 response: 00:15:11.985 { 00:15:11.985 "code": -5, 00:15:11.985 "message": "Input/output error" 00:15:11.985 } 00:15:11.985 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:11.985 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:11.985 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:11.985 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:15:11.985 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:11.985 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:11.985 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:12.243 nvme0n1 00:15:12.243 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:12.243 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.243 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:12.502 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.502 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.502 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.070 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 00:15:13.070 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.070 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.070 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.070 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:13.070 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:13.070 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:14.006 nvme0n1 00:15:14.006 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:14.006 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:14.006 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.574 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.574 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:14.574 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.574 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.574 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.574 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:14.574 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:14.574 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.832 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.832 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:15:14.832 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid 9ed3da8d-b493-400f-8e42-fb307dd7edcc -l 0 --dhchap-secret DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: --dhchap-ctrl-secret DHHC-1:03:MmVlZDMwMWFiZDk0MjBmYjk1MGU5ZTNiYzcyZmFhYzgxNmYzNDgwY2UwM2M1ODY4OWNlMzNmM2Q4MjQ3MmI4M/WM6Eg=: 00:15:15.399 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:15:15.399 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:15.399 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:15.399 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:15.399 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:15.399 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:15.399 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:15.399 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.399 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.987 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:15.987 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:15.987 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:15.987 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:15.987 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.987 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:15.987 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.987 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:15.987 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:15.987 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:16.563 request: 00:15:16.563 { 00:15:16.563 "name": "nvme0", 00:15:16.563 "trtype": "tcp", 00:15:16.563 "traddr": "10.0.0.3", 00:15:16.563 "adrfam": "ipv4", 00:15:16.563 "trsvcid": "4420", 00:15:16.563 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:16.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc", 00:15:16.563 "prchk_reftag": false, 00:15:16.563 "prchk_guard": false, 00:15:16.563 "hdgst": false, 00:15:16.563 "ddgst": false, 00:15:16.563 "dhchap_key": "key1", 00:15:16.563 "allow_unrecognized_csi": false, 00:15:16.563 "method": "bdev_nvme_attach_controller", 00:15:16.563 "req_id": 1 00:15:16.563 } 00:15:16.563 Got JSON-RPC error response 00:15:16.563 response: 00:15:16.563 { 00:15:16.563 "code": -5, 00:15:16.563 "message": "Input/output error" 00:15:16.563 } 00:15:16.563 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:16.563 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:16.563 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:16.563 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:16.563 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:16.563 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:16.563 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:17.501 nvme0n1 00:15:17.501 
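The exchange above is the core DH-CHAP rotation pattern in this test: the target changes which keys it will accept for the host with nvmf_subsystem_set_keys, an attach that still offers the retired key fails the handshake and comes back as JSON-RPC error -5 ("Input/output error"), and only the freshly configured pair authenticates. A minimal sketch of that pair of calls, using the same socket, address and NQNs shown in the trace (rpc.py paths shortened; key0..key3 refer to keys registered earlier in the run; the target-side call is issued through rpc_cmd in the trace, so its RPC socket is not shown here):
    # target: only accept key2 (host key) / key3 (controller key) for this host NQN
    scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # host: attaching with the old key1 now returns {"code": -5, "message": "Input/output error"};
    # attaching with the new pair succeeds and exposes nvme0n1
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3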
04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:17.501 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.501 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:17.761 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.761 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.761 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.019 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:15:18.019 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.019 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.019 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.019 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:18.019 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:18.019 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:18.295 nvme0n1 00:15:18.295 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:18.295 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:18.295 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.554 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.554 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.554 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.812 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:18.812 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.812 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.812 04:05:00 
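Also visible just above, as read from the trace: once the target is handed no keys at all for this host (the nvmf_subsystem_set_keys call with neither --dhchap-key nor --dhchap-ctrlr-key), a plain attach with no DH-CHAP options succeeds, and the test verifies and drops the controller the same way each time. A condensed sketch of that verify-and-detach step:
    # confirm exactly which controller is attached, then remove it
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0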
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.812 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: '' 2s 00:15:18.812 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:18.812 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:18.812 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: 00:15:18.812 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:18.812 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:18.812 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:18.812 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: ]] 00:15:18.812 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OGQ5Y2JiNDRhZTkzMDg4ODM3ZWU1MDRmNWVkNDA3M2Ramner: 00:15:18.812 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:18.812 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:18.812 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:21.341 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:15:21.341 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:21.341 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:21.341 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:21.341 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:21.341 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:21.341 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:21.341 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:21.341 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.341 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.341 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.341 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: 2s 00:15:21.341 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:21.341 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:21.341 04:05:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:21.341 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: 00:15:21.341 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:21.341 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:21.341 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:21.342 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: ]] 00:15:21.342 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDE5MGVlZGZhOGVmNDQ4MjhkMmM0MjU1MDY1NTI1NDdhZTJmYTNjNTZlMGYzZWZmoTdJnA==: 00:15:21.342 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:21.342 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:23.243 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:23.243 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:23.243 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:23.243 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:23.243 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:23.243 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:23.243 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:23.243 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.243 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:23.243 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.243 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.243 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.243 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:23.243 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:23.243 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:24.178 nvme0n1 00:15:24.178 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:24.178 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.178 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.178 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.178 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:24.178 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:24.746 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:24.746 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.746 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:25.006 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.006 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:15:25.006 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.006 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.006 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.006 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:25.006 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:25.573 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:25.573 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:25.573 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.573 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.573 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:25.573 04:05:07 
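Note that the re-key above happens without detaching anything: the target is given the new pair with nvmf_subsystem_set_keys, then the host pushes the matching pair onto the live controller with bdev_nvme_set_keys and simply re-checks that the controller is still present. A sketch of the host-side half as it appears in the trace:
    # re-authenticate the existing controller in place with the new key pair
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # the controller should survive the re-key
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0
The same two RPCs are also invoked with no key arguments a few lines up, which clears the keys on both sides again.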
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.573 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.573 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.573 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:25.573 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:25.573 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:25.573 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:25.573 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:25.573 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:25.833 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:25.833 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:25.833 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:26.400 request: 00:15:26.400 { 00:15:26.400 "name": "nvme0", 00:15:26.400 "dhchap_key": "key1", 00:15:26.400 "dhchap_ctrlr_key": "key3", 00:15:26.400 "method": "bdev_nvme_set_keys", 00:15:26.400 "req_id": 1 00:15:26.400 } 00:15:26.400 Got JSON-RPC error response 00:15:26.400 response: 00:15:26.400 { 00:15:26.400 "code": -13, 00:15:26.400 "message": "Permission denied" 00:15:26.400 } 00:15:26.400 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:26.400 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:26.400 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:26.400 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:26.400 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:26.400 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.400 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:26.659 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:26.659 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:27.591 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:27.591 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:27.591 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.158 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:28.158 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:28.158 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.158 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.158 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.158 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:28.158 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:28.158 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:29.093 nvme0n1 00:15:29.093 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:29.093 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.093 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.093 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.093 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:29.093 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:29.093 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:29.093 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:29.093 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.093 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:29.093 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.093 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:29.093 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:29.658 request: 00:15:29.658 { 00:15:29.658 "name": "nvme0", 00:15:29.658 "dhchap_key": "key2", 00:15:29.658 "dhchap_ctrlr_key": "key0", 00:15:29.658 "method": "bdev_nvme_set_keys", 00:15:29.658 "req_id": 1 00:15:29.658 } 00:15:29.658 Got JSON-RPC error response 00:15:29.658 response: 00:15:29.658 { 00:15:29.658 "code": -13, 00:15:29.658 "message": "Permission denied" 00:15:29.658 } 00:15:29.658 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:29.658 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:29.658 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:29.658 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:29.658 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:29.658 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.658 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:29.967 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:29.967 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:30.900 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:30.900 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:30.900 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.469 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:31.469 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:31.469 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:31.469 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67710 00:15:31.469 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67710 ']' 00:15:31.469 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67710 00:15:31.469 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:31.469 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.469 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67710 00:15:31.469 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:31.469 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:31.469 killing process with pid 67710 00:15:31.469 04:05:13 
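Two different error codes do the work in this part of the trace and are easy to conflate: an attach whose DH-CHAP handshake fails (a wrong or retired key on bdev_nvme_attach_controller) is reported as JSON-RPC -5 "Input/output error", whereas a re-key the target will not accept (bdev_nvme_set_keys offering a pair the subsystem was not given) is reported as -13 "Permission denied". The negative case exercised above, in sketch form:
    # target currently holds key2/key3 for this host; offering key2/key0 is refused
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key0
    # -> request fails with {"code": -13, "message": "Permission denied"}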
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67710' 00:15:31.469 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67710 00:15:31.469 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67710 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:32.036 rmmod nvme_tcp 00:15:32.036 rmmod nvme_fabrics 00:15:32.036 rmmod nvme_keyring 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70768 ']' 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70768 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70768 ']' 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70768 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70768 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:32.036 killing process with pid 70768 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70768' 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70768 00:15:32.036 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70768 00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:32.295 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:32.554 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:32.554 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:32.554 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:32.554 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:32.554 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.554 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.554 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.554 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:15:32.554 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Ekf /tmp/spdk.key-sha256.hjh /tmp/spdk.key-sha384.Pis /tmp/spdk.key-sha512.wzh /tmp/spdk.key-sha512.0bi /tmp/spdk.key-sha384.aBV /tmp/spdk.key-sha256.zgQ '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:32.554 00:15:32.554 real 3m13.506s 00:15:32.554 user 7m42.987s 00:15:32.554 sys 0m30.742s 00:15:32.554 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.554 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.554 ************************************ 00:15:32.554 END TEST nvmf_auth_target 
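From here the run is just nvmftestfini unwinding the virtual test network: SPDK-specific firewall rules are dropped on restore, the initiator- and target-side veth bridge ports are detached and downed, the bridge is deleted, and the interfaces living in the nvmf_tgt_ns_spdk namespace are removed before the namespace itself goes away. Condensed from the teardown commands in this trace (ordering regrouped slightly for readability):
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except SPDK_NVMF rules
    for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$link" nomaster
        ip link set "$link" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2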
00:15:32.554 ************************************ 00:15:32.554 04:05:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:15:32.554 04:05:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:32.554 04:05:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:32.554 04:05:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.554 04:05:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:32.554 ************************************ 00:15:32.554 START TEST nvmf_bdevio_no_huge 00:15:32.554 ************************************ 00:15:32.554 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:32.554 * Looking for test storage... 00:15:32.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:32.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.814 --rc genhtml_branch_coverage=1 00:15:32.814 --rc genhtml_function_coverage=1 00:15:32.814 --rc genhtml_legend=1 00:15:32.814 --rc geninfo_all_blocks=1 00:15:32.814 --rc geninfo_unexecuted_blocks=1 00:15:32.814 00:15:32.814 ' 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:32.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.814 --rc genhtml_branch_coverage=1 00:15:32.814 --rc genhtml_function_coverage=1 00:15:32.814 --rc genhtml_legend=1 00:15:32.814 --rc geninfo_all_blocks=1 00:15:32.814 --rc geninfo_unexecuted_blocks=1 00:15:32.814 00:15:32.814 ' 00:15:32.814 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:32.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.815 --rc genhtml_branch_coverage=1 00:15:32.815 --rc genhtml_function_coverage=1 00:15:32.815 --rc genhtml_legend=1 00:15:32.815 --rc geninfo_all_blocks=1 00:15:32.815 --rc geninfo_unexecuted_blocks=1 00:15:32.815 00:15:32.815 ' 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:32.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.815 --rc genhtml_branch_coverage=1 00:15:32.815 --rc genhtml_function_coverage=1 00:15:32.815 --rc genhtml_legend=1 00:15:32.815 --rc geninfo_all_blocks=1 00:15:32.815 --rc geninfo_unexecuted_blocks=1 00:15:32.815 00:15:32.815 ' 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:32.815 
04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:32.815 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:32.815 
04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:32.815 Cannot find device "nvmf_init_br" 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:32.815 Cannot find device "nvmf_init_br2" 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:32.815 Cannot find device "nvmf_tgt_br" 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:15:32.815 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:32.815 Cannot find device "nvmf_tgt_br2" 00:15:32.816 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:15:32.816 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:32.816 Cannot find device "nvmf_init_br" 00:15:32.816 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:15:32.816 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:32.816 Cannot find device "nvmf_init_br2" 00:15:32.816 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:15:32.816 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:32.816 Cannot find device "nvmf_tgt_br" 00:15:32.816 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:15:32.816 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:32.816 Cannot find device "nvmf_tgt_br2" 00:15:32.816 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:15:32.816 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:32.816 Cannot find device "nvmf_br" 00:15:32.816 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:15:32.816 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:32.816 Cannot find device "nvmf_init_if" 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:33.075 Cannot find device "nvmf_init_if2" 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:15:33.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:33.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:33.075 04:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:33.075 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:33.075 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:33.075 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:33.075 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:33.075 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:33.075 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:33.075 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:33.075 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:33.075 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:33.075 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:15:33.075 00:15:33.075 --- 10.0.0.3 ping statistics --- 00:15:33.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.075 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:33.334 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:33.334 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:15:33.334 00:15:33.334 --- 10.0.0.4 ping statistics --- 00:15:33.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.334 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:33.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:33.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:33.334 00:15:33.334 --- 10.0.0.1 ping statistics --- 00:15:33.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.334 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:33.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:33.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:15:33.334 00:15:33.334 --- 10.0.0.2 ping statistics --- 00:15:33.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.334 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=71417 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 71417 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 71417 ']' 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.334 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:33.334 [2024-12-09 04:05:15.135912] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
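While the target application starts up inside the namespace, the nvmf_veth_init trace above is what built the network it will listen on: two initiator-side and two target-side veth pairs, the target ends moved into nvmf_tgt_ns_spdk, the host-side peers enslaved to the nvmf_br bridge, iptables ACCEPT rules for port 4420, and ping checks across the bridge. A minimal standalone sketch of that topology, using the interface names and addresses seen in the trace (illustrative only, not the test harness itself):

    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"

    # Two initiator-side and two target-side veth pairs (names as in the trace).
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target ends live in the namespace where nvmf_tgt runs.
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"

    # Initiators on 10.0.0.1/.2, target listeners on 10.0.0.3/.4 (the addresses pinged above).
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring the links up and bridge the host-side peers together.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Accept NVMe/TCP traffic on the default port and let it cross the bridge.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity check: the host side should reach the in-namespace target address.
    ping -c 1 10.0.0.3
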
00:15:33.334 [2024-12-09 04:05:15.136030] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:33.595 [2024-12-09 04:05:15.306103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:33.595 [2024-12-09 04:05:15.394833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.595 [2024-12-09 04:05:15.394916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.595 [2024-12-09 04:05:15.394942] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.595 [2024-12-09 04:05:15.394964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.595 [2024-12-09 04:05:15.394973] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.595 [2024-12-09 04:05:15.397561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:33.595 [2024-12-09 04:05:15.397737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:33.595 [2024-12-09 04:05:15.397863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:33.595 [2024-12-09 04:05:15.397866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:33.595 [2024-12-09 04:05:15.404220] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:34.530 [2024-12-09 04:05:16.225173] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:34.530 Malloc0 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.530 04:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:34.530 [2024-12-09 04:05:16.270702] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:34.530 { 00:15:34.530 "params": { 00:15:34.530 "name": "Nvme$subsystem", 00:15:34.530 "trtype": "$TEST_TRANSPORT", 00:15:34.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:34.530 "adrfam": "ipv4", 00:15:34.530 "trsvcid": "$NVMF_PORT", 00:15:34.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:34.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:34.530 "hdgst": ${hdgst:-false}, 00:15:34.530 "ddgst": ${ddgst:-false} 00:15:34.530 }, 00:15:34.530 "method": "bdev_nvme_attach_controller" 00:15:34.530 } 00:15:34.530 EOF 00:15:34.530 )") 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
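The JSON printed just below is what gen_nvmf_target_json hands to bdevio over /dev/fd/62; on the target side, the configuration traced above reduces to a handful of rpc.py calls: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 with that bdev as a namespace, and a listener on the in-namespace address. A sketch of the equivalent manual sequence (rpc.py path as used by the trace; the default RPC socket is assumed):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Transport and backing bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512).
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0

    # Subsystem cnode1: allow any host, attach Malloc0, listen on the target veth address.
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
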
00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:15:34.530 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:34.530 "params": { 00:15:34.530 "name": "Nvme1", 00:15:34.530 "trtype": "tcp", 00:15:34.530 "traddr": "10.0.0.3", 00:15:34.530 "adrfam": "ipv4", 00:15:34.530 "trsvcid": "4420", 00:15:34.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:34.530 "hdgst": false, 00:15:34.530 "ddgst": false 00:15:34.530 }, 00:15:34.530 "method": "bdev_nvme_attach_controller" 00:15:34.530 }' 00:15:34.530 [2024-12-09 04:05:16.334735] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:15:34.530 [2024-12-09 04:05:16.334853] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71457 ] 00:15:34.788 [2024-12-09 04:05:16.497300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:34.788 [2024-12-09 04:05:16.588108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.788 [2024-12-09 04:05:16.588251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.788 [2024-12-09 04:05:16.588259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.788 [2024-12-09 04:05:16.603183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:35.047 I/O targets: 00:15:35.047 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:35.047 00:15:35.047 00:15:35.047 CUnit - A unit testing framework for C - Version 2.1-3 00:15:35.047 http://cunit.sourceforge.net/ 00:15:35.047 00:15:35.047 00:15:35.047 Suite: bdevio tests on: Nvme1n1 00:15:35.047 Test: blockdev write read block ...passed 00:15:35.047 Test: blockdev write zeroes read block ...passed 00:15:35.047 Test: blockdev write zeroes read no split ...passed 00:15:35.047 Test: blockdev write zeroes read split ...passed 00:15:35.047 Test: blockdev write zeroes read split partial ...passed 00:15:35.047 Test: blockdev reset ...[2024-12-09 04:05:16.862490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:35.047 [2024-12-09 04:05:16.862847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1155e90 (9): Bad file descriptor 00:15:35.047 [2024-12-09 04:05:16.879021] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:35.047 passed 00:15:35.047 Test: blockdev write read 8 blocks ...passed 00:15:35.047 Test: blockdev write read size > 128k ...passed 00:15:35.047 Test: blockdev write read invalid size ...passed 00:15:35.047 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:35.047 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:35.047 Test: blockdev write read max offset ...passed 00:15:35.047 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:35.047 Test: blockdev writev readv 8 blocks ...passed 00:15:35.047 Test: blockdev writev readv 30 x 1block ...passed 00:15:35.047 Test: blockdev writev readv block ...passed 00:15:35.047 Test: blockdev writev readv size > 128k ...passed 00:15:35.047 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:35.047 Test: blockdev comparev and writev ...[2024-12-09 04:05:16.887156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:35.047 [2024-12-09 04:05:16.887208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.047 [2024-12-09 04:05:16.887230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:35.047 [2024-12-09 04:05:16.887242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:35.047 [2024-12-09 04:05:16.887596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:35.047 [2024-12-09 04:05:16.887626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:35.047 [2024-12-09 04:05:16.887644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:35.047 [2024-12-09 04:05:16.887654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:35.047 [2024-12-09 04:05:16.888073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:35.047 [2024-12-09 04:05:16.888101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:35.047 [2024-12-09 04:05:16.888119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:35.047 [2024-12-09 04:05:16.888130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:35.047 [2024-12-09 04:05:16.888498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:35.047 [2024-12-09 04:05:16.888528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:35.047 [2024-12-09 04:05:16.888546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:35.047 [2024-12-09 04:05:16.888557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:35.047 passed 00:15:35.047 Test: blockdev nvme passthru rw ...passed 00:15:35.047 Test: blockdev nvme passthru vendor specific ...[2024-12-09 04:05:16.889374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:35.047 [2024-12-09 04:05:16.889401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:35.047 [2024-12-09 04:05:16.889508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:35.047 [2024-12-09 04:05:16.889532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:35.047 [2024-12-09 04:05:16.889669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:35.047 [2024-12-09 04:05:16.889691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:35.047 [2024-12-09 04:05:16.889796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:35.047 [2024-12-09 04:05:16.889817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:35.047 passed 00:15:35.047 Test: blockdev nvme admin passthru ...passed 00:15:35.047 Test: blockdev copy ...passed 00:15:35.047 00:15:35.047 Run Summary: Type Total Ran Passed Failed Inactive 00:15:35.047 suites 1 1 n/a 0 0 00:15:35.047 tests 23 23 23 0 0 00:15:35.047 asserts 152 152 152 0 n/a 00:15:35.047 00:15:35.047 Elapsed time = 0.168 seconds 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:35.613 rmmod nvme_tcp 00:15:35.613 rmmod nvme_fabrics 00:15:35.613 rmmod nvme_keyring 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 71417 ']' 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 71417 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 71417 ']' 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 71417 00:15:35.613 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:15:35.614 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:35.614 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71417 00:15:35.614 killing process with pid 71417 00:15:35.614 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:35.614 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:35.614 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71417' 00:15:35.614 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 71417 00:15:35.614 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 71417 00:15:36.283 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:36.283 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:36.283 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:36.283 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:15:36.283 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:15:36.283 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:36.283 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:15:36.283 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:36.283 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:36.283 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:36.283 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:36.283 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:36.283 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:36.283 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:36.283 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:36.283 04:05:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:36.283 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:36.283 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:36.283 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:36.283 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:36.283 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:36.283 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:36.283 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:36.283 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.284 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:36.284 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.284 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:15:36.284 00:15:36.284 real 0m3.721s 00:15:36.284 user 0m11.540s 00:15:36.284 sys 0m1.529s 00:15:36.284 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.284 ************************************ 00:15:36.284 END TEST nvmf_bdevio_no_huge 00:15:36.284 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:36.284 ************************************ 00:15:36.284 04:05:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:36.284 04:05:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:36.284 04:05:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.284 04:05:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:36.284 ************************************ 00:15:36.284 START TEST nvmf_tls 00:15:36.284 ************************************ 00:15:36.284 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:36.542 * Looking for test storage... 
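Before the nvmf_tls suite gets going, the nvmftestfini/nvmf_veth_fini trace above tore the earlier topology back down: the SPDK-tagged iptables rules are stripped out, bridge memberships removed, links downed, and the bridge, veth pairs and target namespace deleted. A minimal sketch of an equivalent cleanup, assuming the same names as before (the namespace removal itself is hidden behind _remove_spdk_ns in the trace, so ip netns delete is an assumption):

    NS=nvmf_tgt_ns_spdk

    # Drop only the SPDK-tagged iptables rules, leaving the rest of the ruleset alone.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Detach the host-side peers from the bridge and bring them down.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true
        ip link set "$dev" down     || true
    done

    # Delete the bridge, the initiator veths, and the in-namespace target veths.
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if  || true
    ip link delete nvmf_init_if2 || true
    ip netns exec "$NS" ip link delete nvmf_tgt_if  || true
    ip netns exec "$NS" ip link delete nvmf_tgt_if2 || true

    # Assumed equivalent of _remove_spdk_ns (the command is not shown verbatim in the trace).
    ip netns delete "$NS" || true
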
00:15:36.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:36.542 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:36.542 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:15:36.542 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:36.542 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:36.542 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:36.542 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:36.542 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:36.542 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:15:36.542 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:15:36.542 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:15:36.542 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:15:36.542 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:15:36.542 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:15:36.542 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:15:36.542 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:36.542 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:15:36.542 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:15:36.542 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:36.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.543 --rc genhtml_branch_coverage=1 00:15:36.543 --rc genhtml_function_coverage=1 00:15:36.543 --rc genhtml_legend=1 00:15:36.543 --rc geninfo_all_blocks=1 00:15:36.543 --rc geninfo_unexecuted_blocks=1 00:15:36.543 00:15:36.543 ' 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:36.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.543 --rc genhtml_branch_coverage=1 00:15:36.543 --rc genhtml_function_coverage=1 00:15:36.543 --rc genhtml_legend=1 00:15:36.543 --rc geninfo_all_blocks=1 00:15:36.543 --rc geninfo_unexecuted_blocks=1 00:15:36.543 00:15:36.543 ' 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:36.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.543 --rc genhtml_branch_coverage=1 00:15:36.543 --rc genhtml_function_coverage=1 00:15:36.543 --rc genhtml_legend=1 00:15:36.543 --rc geninfo_all_blocks=1 00:15:36.543 --rc geninfo_unexecuted_blocks=1 00:15:36.543 00:15:36.543 ' 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:36.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.543 --rc genhtml_branch_coverage=1 00:15:36.543 --rc genhtml_function_coverage=1 00:15:36.543 --rc genhtml_legend=1 00:15:36.543 --rc geninfo_all_blocks=1 00:15:36.543 --rc geninfo_unexecuted_blocks=1 00:15:36.543 00:15:36.543 ' 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.543 04:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:36.543 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:36.543 
04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:36.543 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:36.544 Cannot find device "nvmf_init_br" 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:36.544 Cannot find device "nvmf_init_br2" 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:36.544 Cannot find device "nvmf_tgt_br" 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:36.544 Cannot find device "nvmf_tgt_br2" 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:36.544 Cannot find device "nvmf_init_br" 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:36.544 Cannot find device "nvmf_init_br2" 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:36.544 Cannot find device "nvmf_tgt_br" 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:15:36.544 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:36.544 Cannot find device "nvmf_tgt_br2" 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:36.802 Cannot find device "nvmf_br" 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:36.802 Cannot find device "nvmf_init_if" 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:36.802 Cannot find device "nvmf_init_if2" 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:36.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:36.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:36.802 04:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:36.802 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:37.061 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:37.061 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:15:37.061 00:15:37.061 --- 10.0.0.3 ping statistics --- 00:15:37.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.061 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:37.061 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:37.061 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:15:37.061 00:15:37.061 --- 10.0.0.4 ping statistics --- 00:15:37.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.061 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:37.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:37.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:15:37.061 00:15:37.061 --- 10.0.0.1 ping statistics --- 00:15:37.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.061 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:37.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:37.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:15:37.061 00:15:37.061 --- 10.0.0.2 ping statistics --- 00:15:37.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.061 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71691 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71691 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71691 ']' 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.061 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.061 [2024-12-09 04:05:18.858673] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
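The nvmf_veth_init trace above builds the standard SPDK TCP test bed: two initiator-side veth pairs stay in the root namespace, the two target-side pairs are moved into nvmf_tgt_ns_spdk, and all of the *_br peer ends are enslaved to the nvmf_br bridge, which the four pings then verify end to end. A condensed sketch of that topology, using only the interface names, addresses and iptables rules that appear in the trace (the real helper in test/nvmf/common.sh also performs the teardown attempts and retries logged above):

ip netns add nvmf_tgt_ns_spdk

# one veth pair per interface; the *_br peer ends are bridged later
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# target-side interfaces live inside the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing as verified by the pings: initiators .1/.2, targets .3/.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# bring everything up, create the bridge and enslave the bridge-side ends
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# let NVMe/TCP traffic (port 4420) reach the initiator interfaces and cross the bridge
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT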
00:15:37.061 [2024-12-09 04:05:18.859017] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.318 [2024-12-09 04:05:19.015511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.318 [2024-12-09 04:05:19.095866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.318 [2024-12-09 04:05:19.095940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.318 [2024-12-09 04:05:19.095966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.318 [2024-12-09 04:05:19.095977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.318 [2024-12-09 04:05:19.095986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.318 [2024-12-09 04:05:19.096561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.883 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.883 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:37.883 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:37.883 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:37.883 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:38.140 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.141 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:15:38.141 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:38.141 true 00:15:38.397 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:38.397 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:15:38.397 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:15:38.397 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:15:38.397 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:38.962 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:38.962 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:15:39.219 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:15:39.219 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:15:39.219 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:39.476 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:15:39.476 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:15:39.734 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:15:39.734 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:15:39.734 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:39.734 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:15:39.991 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:15:39.991 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:15:39.991 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:40.249 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:15:40.249 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:40.507 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:15:40.507 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:15:40.507 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:40.765 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:40.765 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:15:41.024 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:15:41.024 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:15:41.024 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:41.024 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:41.024 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:41.024 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:41.024 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:15:41.024 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:41.024 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:41.024 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:41.024 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:41.024 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:41.024 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:41.024 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:41.024 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:15:41.024 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:41.024 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:41.283 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:41.283 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:41.283 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Hif2z2dKA3 00:15:41.283 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:15:41.283 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.G0qa89hRkC 00:15:41.283 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:41.283 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:41.283 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Hif2z2dKA3 00:15:41.283 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.G0qa89hRkC 00:15:41.283 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:41.542 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:41.802 [2024-12-09 04:05:23.547287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:41.802 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Hif2z2dKA3 00:15:41.802 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Hif2z2dKA3 00:15:41.802 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:42.063 [2024-12-09 04:05:23.898778] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:42.063 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:42.326 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:42.585 [2024-12-09 04:05:24.366875] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:42.585 [2024-12-09 04:05:24.367250] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:42.585 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:42.844 malloc0 00:15:42.844 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:43.102 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Hif2z2dKA3 00:15:43.361 04:05:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:43.619 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Hif2z2dKA3 00:15:55.819 Initializing NVMe Controllers 00:15:55.819 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:55.819 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:55.819 Initialization complete. Launching workers. 00:15:55.819 ======================================================== 00:15:55.819 Latency(us) 00:15:55.819 Device Information : IOPS MiB/s Average min max 00:15:55.819 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11026.10 43.07 5805.80 1000.76 12704.44 00:15:55.819 ======================================================== 00:15:55.819 Total : 11026.10 43.07 5805.80 1000.76 12704.44 00:15:55.819 00:15:55.819 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hif2z2dKA3 00:15:55.819 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:55.819 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:55.819 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:55.819 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Hif2z2dKA3 00:15:55.819 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:55.819 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71935 00:15:55.819 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:55.819 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:55.819 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71935 /var/tmp/bdevperf.sock 00:15:55.819 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71935 ']' 00:15:55.819 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:55.819 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.819 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:55.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
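The two PSKs used by the rest of the suite come from format_interchange_psk above: the trace shows it turning 00112233445566778899aabbccddeeff (digest 1) into NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: and ffeeddccbbaa99887766554433221100 into the key_2 value, each of which is then written to a mktemp file, chmod 0600, and later handed to keyring_file_add_key. The helper below is a plausible reconstruction of that formatting step, assuming the base64 payload is the configured key followed by its 4-byte CRC-32 (the NVMe TLS PSK interchange layout); the checksum byte order and the helper body are assumptions, not copied from the test source.

# assumed reconstruction of format_key / format_interchange_psk as traced above
format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" << 'EOF'
import base64, sys, zlib

prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
crc = zlib.crc32(key.encode()).to_bytes(4, "little")        # assumed byte order
payload = base64.b64encode(key.encode() + crc).decode()
print(f"{prefix}:{digest:02x}:{payload}:", end="")          # e.g. NVMeTLSkey-1:01:...:
EOF
}

format_interchange_psk() {
    format_key NVMeTLSkey-1 "$1" "$2"
}

key_path=$(mktemp)
format_interchange_psk 00112233445566778899aabbccddeeff 1 > "$key_path"
chmod 0600 "$key_path"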
00:15:55.819 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.819 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:55.819 [2024-12-09 04:05:35.670019] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:15:55.819 [2024-12-09 04:05:35.670162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71935 ] 00:15:55.819 [2024-12-09 04:05:35.819686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.819 [2024-12-09 04:05:35.881682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.819 [2024-12-09 04:05:35.959825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:55.819 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.819 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:55.819 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Hif2z2dKA3 00:15:55.820 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:55.820 [2024-12-09 04:05:37.069852] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:55.820 TLSTESTn1 00:15:55.820 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:55.820 Running I/O for 10 seconds... 
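Condensed into one place, the RPC sequence this first (successful) case drives through scripts/rpc.py is the one below; every RPC call is taken from the trace, with rpc and key as shorthands for the script path and the first PSK file printed above. The target was started with --wait-for-rpc, so the socket-implementation options can still be changed before framework_start_init:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.Hif2z2dKA3        # first PSK file from the steps above

# target side (nvmf_tgt running inside nvmf_tgt_ns_spdk)
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
[[ "$($rpc sock_impl_get_options -i ssl | jq -r .tls_version)" == 13 ]]   # read-back check
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# initiator side (bdevperf exposes its own RPC socket)
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key"
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests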
00:15:57.409 4565.00 IOPS, 17.83 MiB/s [2024-12-09T04:05:40.736Z] 4589.50 IOPS, 17.93 MiB/s [2024-12-09T04:05:41.672Z] 4591.00 IOPS, 17.93 MiB/s [2024-12-09T04:05:42.609Z] 4604.00 IOPS, 17.98 MiB/s [2024-12-09T04:05:43.545Z] 4627.00 IOPS, 18.07 MiB/s [2024-12-09T04:05:44.483Z] 4649.50 IOPS, 18.16 MiB/s [2024-12-09T04:05:45.424Z] 4676.29 IOPS, 18.27 MiB/s [2024-12-09T04:05:46.356Z] 4697.38 IOPS, 18.35 MiB/s [2024-12-09T04:05:47.730Z] 4708.11 IOPS, 18.39 MiB/s [2024-12-09T04:05:47.730Z] 4706.90 IOPS, 18.39 MiB/s 00:16:05.780 Latency(us) 00:16:05.780 [2024-12-09T04:05:47.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.780 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:05.780 Verification LBA range: start 0x0 length 0x2000 00:16:05.780 TLSTESTn1 : 10.02 4712.40 18.41 0.00 0.00 27112.76 5451.40 20733.21 00:16:05.780 [2024-12-09T04:05:47.730Z] =================================================================================================================== 00:16:05.780 [2024-12-09T04:05:47.730Z] Total : 4712.40 18.41 0.00 0.00 27112.76 5451.40 20733.21 00:16:05.780 { 00:16:05.780 "results": [ 00:16:05.780 { 00:16:05.780 "job": "TLSTESTn1", 00:16:05.780 "core_mask": "0x4", 00:16:05.780 "workload": "verify", 00:16:05.780 "status": "finished", 00:16:05.780 "verify_range": { 00:16:05.780 "start": 0, 00:16:05.780 "length": 8192 00:16:05.780 }, 00:16:05.780 "queue_depth": 128, 00:16:05.780 "io_size": 4096, 00:16:05.780 "runtime": 10.015288, 00:16:05.780 "iops": 4712.395689469939, 00:16:05.780 "mibps": 18.407795661991948, 00:16:05.780 "io_failed": 0, 00:16:05.780 "io_timeout": 0, 00:16:05.780 "avg_latency_us": 27112.764906733235, 00:16:05.780 "min_latency_us": 5451.403636363636, 00:16:05.780 "max_latency_us": 20733.20727272727 00:16:05.780 } 00:16:05.780 ], 00:16:05.780 "core_count": 1 00:16:05.780 } 00:16:05.780 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:05.780 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71935 00:16:05.780 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71935 ']' 00:16:05.780 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71935 00:16:05.780 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:05.780 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:05.780 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71935 00:16:05.780 killing process with pid 71935 00:16:05.780 Received shutdown signal, test time was about 10.000000 seconds 00:16:05.780 00:16:05.780 Latency(us) 00:16:05.780 [2024-12-09T04:05:47.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.780 [2024-12-09T04:05:47.730Z] =================================================================================================================== 00:16:05.780 [2024-12-09T04:05:47.730Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:05.780 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:05.780 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:05.780 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 71935' 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71935 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71935 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G0qa89hRkC 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G0qa89hRkC 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G0qa89hRkC 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.G0qa89hRkC 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:05.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72064 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72064 /var/tmp/bdevperf.sock 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72064 ']' 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.781 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:05.781 [2024-12-09 04:05:47.722264] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:16:05.781 [2024-12-09 04:05:47.722409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72064 ] 00:16:06.038 [2024-12-09 04:05:47.870538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.038 [2024-12-09 04:05:47.943307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.298 [2024-12-09 04:05:48.025241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:06.864 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.864 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:06.864 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.G0qa89hRkC 00:16:07.122 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:07.381 [2024-12-09 04:05:49.224298] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:07.381 [2024-12-09 04:05:49.230223] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spd[2024-12-09 04:05:49.230391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d05030 (107): Transport endpoint is not connected 00:16:07.381 k_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:07.381 [2024-12-09 04:05:49.231363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d05030 (9): Bad file descriptor 00:16:07.381 [2024-12-09 04:05:49.232360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:07.381 [2024-12-09 04:05:49.232385] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:07.381 [2024-12-09 04:05:49.232397] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:07.381 [2024-12-09 04:05:49.232413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:16:07.381 request: 00:16:07.381 { 00:16:07.381 "name": "TLSTEST", 00:16:07.381 "trtype": "tcp", 00:16:07.381 "traddr": "10.0.0.3", 00:16:07.381 "adrfam": "ipv4", 00:16:07.381 "trsvcid": "4420", 00:16:07.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:07.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:07.381 "prchk_reftag": false, 00:16:07.381 "prchk_guard": false, 00:16:07.381 "hdgst": false, 00:16:07.381 "ddgst": false, 00:16:07.381 "psk": "key0", 00:16:07.381 "allow_unrecognized_csi": false, 00:16:07.381 "method": "bdev_nvme_attach_controller", 00:16:07.381 "req_id": 1 00:16:07.381 } 00:16:07.381 Got JSON-RPC error response 00:16:07.381 response: 00:16:07.381 { 00:16:07.381 "code": -5, 00:16:07.381 "message": "Input/output error" 00:16:07.381 } 00:16:07.381 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72064 00:16:07.381 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72064 ']' 00:16:07.381 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72064 00:16:07.381 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:07.381 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:07.381 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72064 00:16:07.381 killing process with pid 72064 00:16:07.381 Received shutdown signal, test time was about 10.000000 seconds 00:16:07.381 00:16:07.381 Latency(us) 00:16:07.381 [2024-12-09T04:05:49.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.381 [2024-12-09T04:05:49.331Z] =================================================================================================================== 00:16:07.381 [2024-12-09T04:05:49.331Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:07.381 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:07.381 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:07.381 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72064' 00:16:07.381 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72064 00:16:07.381 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72064 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Hif2z2dKA3 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Hif2z2dKA3 
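Everything from target/tls.sh@147 onward in this section is a deliberate failure path: the same bdevperf attach is retried with the second, never-registered key (/tmp/tmp.G0qa89hRkC), with host2 instead of the allowed host1, with the non-existent cnode2, and finally with an empty key path, and in each case bdev_nvme_attach_controller has to fail for the test to pass. A minimal sketch of that expect-failure pattern, with NOT standing in for the exit-status-inverting helper from autotest_common.sh (the trace shows the real helper also distinguishing signal exits, es > 128):

# NOT succeeds only if the wrapped command fails
NOT() { ! "$@"; }

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# key0 on the bdevperf side was loaded from /tmp/tmp.G0qa89hRkC in the case above,
# a PSK the target never registered, so the attach must fail (Input/output error).
NOT $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0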
00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Hif2z2dKA3 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Hif2z2dKA3 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72098 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72098 /var/tmp/bdevperf.sock 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72098 ']' 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:07.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.949 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:07.949 [2024-12-09 04:05:49.657872] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:16:07.949 [2024-12-09 04:05:49.658000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72098 ] 00:16:07.949 [2024-12-09 04:05:49.805815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.949 [2024-12-09 04:05:49.884899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.208 [2024-12-09 04:05:49.962462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:08.776 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.776 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:08.776 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Hif2z2dKA3 00:16:09.035 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:16:09.299 [2024-12-09 04:05:51.138774] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:09.299 [2024-12-09 04:05:51.150453] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:09.299 [2024-12-09 04:05:51.150659] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:09.299 [2024-12-09 04:05:51.150750] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:09.299 [2024-12-09 04:05:51.150796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd43030 (107): Transport endpoint is not connected 00:16:09.299 [2024-12-09 04:05:51.151787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd43030 (9): Bad file descriptor 00:16:09.299 [2024-12-09 04:05:51.152784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:09.299 [2024-12-09 04:05:51.152804] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:09.299 [2024-12-09 04:05:51.152831] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:09.299 [2024-12-09 04:05:51.152846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:16:09.299 request: 00:16:09.299 { 00:16:09.299 "name": "TLSTEST", 00:16:09.299 "trtype": "tcp", 00:16:09.299 "traddr": "10.0.0.3", 00:16:09.299 "adrfam": "ipv4", 00:16:09.299 "trsvcid": "4420", 00:16:09.299 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:09.299 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:09.299 "prchk_reftag": false, 00:16:09.299 "prchk_guard": false, 00:16:09.299 "hdgst": false, 00:16:09.299 "ddgst": false, 00:16:09.299 "psk": "key0", 00:16:09.299 "allow_unrecognized_csi": false, 00:16:09.299 "method": "bdev_nvme_attach_controller", 00:16:09.299 "req_id": 1 00:16:09.299 } 00:16:09.299 Got JSON-RPC error response 00:16:09.300 response: 00:16:09.300 { 00:16:09.300 "code": -5, 00:16:09.300 "message": "Input/output error" 00:16:09.300 } 00:16:09.300 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72098 00:16:09.300 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72098 ']' 00:16:09.300 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72098 00:16:09.300 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:09.300 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:09.300 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72098 00:16:09.300 killing process with pid 72098 00:16:09.300 Received shutdown signal, test time was about 10.000000 seconds 00:16:09.300 00:16:09.300 Latency(us) 00:16:09.300 [2024-12-09T04:05:51.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.300 [2024-12-09T04:05:51.250Z] =================================================================================================================== 00:16:09.300 [2024-12-09T04:05:51.250Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:09.300 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:09.300 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:09.300 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72098' 00:16:09.300 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72098 00:16:09.300 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72098 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hif2z2dKA3 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hif2z2dKA3 
00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:09.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hif2z2dKA3 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Hif2z2dKA3 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72131 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72131 /var/tmp/bdevperf.sock 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72131 ']' 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.582 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:09.582 [2024-12-09 04:05:51.518883] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:16:09.582 [2024-12-09 04:05:51.519156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72131 ] 00:16:09.840 [2024-12-09 04:05:51.659915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.840 [2024-12-09 04:05:51.736483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.099 [2024-12-09 04:05:51.815083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:10.666 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.666 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:10.666 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Hif2z2dKA3 00:16:10.925 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:11.184 [2024-12-09 04:05:52.901006] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:11.184 [2024-12-09 04:05:52.910673] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:11.184 [2024-12-09 04:05:52.911358] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:11.184 [2024-12-09 04:05:52.911757] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:11.184 [2024-12-09 04:05:52.912524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1128030 (107): Transport endpoint is not connected 00:16:11.184 [2024-12-09 04:05:52.913517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1128030 (9): Bad file descriptor 00:16:11.184 [2024-12-09 04:05:52.914514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:16:11.184 [2024-12-09 04:05:52.914541] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:11.184 [2024-12-09 04:05:52.914553] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:16:11.184 [2024-12-09 04:05:52.914571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:16:11.184 request: 00:16:11.184 { 00:16:11.184 "name": "TLSTEST", 00:16:11.184 "trtype": "tcp", 00:16:11.184 "traddr": "10.0.0.3", 00:16:11.184 "adrfam": "ipv4", 00:16:11.184 "trsvcid": "4420", 00:16:11.184 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:11.184 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:11.184 "prchk_reftag": false, 00:16:11.184 "prchk_guard": false, 00:16:11.184 "hdgst": false, 00:16:11.184 "ddgst": false, 00:16:11.184 "psk": "key0", 00:16:11.184 "allow_unrecognized_csi": false, 00:16:11.184 "method": "bdev_nvme_attach_controller", 00:16:11.184 "req_id": 1 00:16:11.184 } 00:16:11.184 Got JSON-RPC error response 00:16:11.184 response: 00:16:11.184 { 00:16:11.184 "code": -5, 00:16:11.184 "message": "Input/output error" 00:16:11.184 } 00:16:11.184 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72131 00:16:11.184 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72131 ']' 00:16:11.184 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72131 00:16:11.184 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:11.184 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.184 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72131 00:16:11.184 killing process with pid 72131 00:16:11.184 Received shutdown signal, test time was about 10.000000 seconds 00:16:11.184 00:16:11.184 Latency(us) 00:16:11.184 [2024-12-09T04:05:53.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.184 [2024-12-09T04:05:53.134Z] =================================================================================================================== 00:16:11.184 [2024-12-09T04:05:53.134Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:11.184 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:11.184 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:11.184 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72131' 00:16:11.184 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72131 00:16:11.184 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72131 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:11.444 04:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:11.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72161 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72161 /var/tmp/bdevperf.sock 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72161 ']' 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:11.444 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:11.445 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:11.445 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:11.445 [2024-12-09 04:05:53.292804] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
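The "NOT run_bdevperf ... ''" invocation above is a negative test: a second bdevperf (pid 72161) is started, and the test then tries to register an empty string as the key path, which the keyring is expected to reject. NOT is an autotest helper that succeeds only when the wrapped command fails; a simplified sketch of the idea (SPDK's real helper in autotest_common.sh also special-cases exit codes above 128, as the traced "(( es > 128 ))" check shows) is:

NOT() {
    # run the wrapped command and invert its status:
    # return success only if it failed
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}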
00:16:11.445 [2024-12-09 04:05:53.293303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72161 ] 00:16:11.704 [2024-12-09 04:05:53.440889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.704 [2024-12-09 04:05:53.522067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.704 [2024-12-09 04:05:53.599420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:12.640 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:12.640 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:12.640 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:16:12.640 [2024-12-09 04:05:54.547016] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:16:12.640 [2024-12-09 04:05:54.547954] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:12.640 request: 00:16:12.640 { 00:16:12.640 "name": "key0", 00:16:12.640 "path": "", 00:16:12.640 "method": "keyring_file_add_key", 00:16:12.640 "req_id": 1 00:16:12.640 } 00:16:12.640 Got JSON-RPC error response 00:16:12.640 response: 00:16:12.640 { 00:16:12.640 "code": -1, 00:16:12.640 "message": "Operation not permitted" 00:16:12.640 } 00:16:12.640 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:12.898 [2024-12-09 04:05:54.771193] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:12.898 [2024-12-09 04:05:54.771798] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:12.898 request: 00:16:12.898 { 00:16:12.898 "name": "TLSTEST", 00:16:12.898 "trtype": "tcp", 00:16:12.898 "traddr": "10.0.0.3", 00:16:12.898 "adrfam": "ipv4", 00:16:12.898 "trsvcid": "4420", 00:16:12.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:12.899 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:12.899 "prchk_reftag": false, 00:16:12.899 "prchk_guard": false, 00:16:12.899 "hdgst": false, 00:16:12.899 "ddgst": false, 00:16:12.899 "psk": "key0", 00:16:12.899 "allow_unrecognized_csi": false, 00:16:12.899 "method": "bdev_nvme_attach_controller", 00:16:12.899 "req_id": 1 00:16:12.899 } 00:16:12.899 Got JSON-RPC error response 00:16:12.899 response: 00:16:12.899 { 00:16:12.899 "code": -126, 00:16:12.899 "message": "Required key not available" 00:16:12.899 } 00:16:12.899 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72161 00:16:12.899 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72161 ']' 00:16:12.899 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72161 00:16:12.899 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:12.899 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.899 04:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72161 00:16:12.899 killing process with pid 72161 00:16:12.899 Received shutdown signal, test time was about 10.000000 seconds 00:16:12.899 00:16:12.899 Latency(us) 00:16:12.899 [2024-12-09T04:05:54.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.899 [2024-12-09T04:05:54.849Z] =================================================================================================================== 00:16:12.899 [2024-12-09T04:05:54.849Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:12.899 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:12.899 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:12.899 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72161' 00:16:12.899 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72161 00:16:12.899 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72161 00:16:13.157 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:13.157 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:13.157 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:13.157 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:13.157 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:13.157 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71691 00:16:13.157 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71691 ']' 00:16:13.157 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71691 00:16:13.157 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:13.157 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.157 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71691 00:16:13.422 killing process with pid 71691 00:16:13.422 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:13.422 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:13.422 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71691' 00:16:13.422 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71691 00:16:13.422 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71691 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.psT9wy31fj 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.psT9wy31fj 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72205 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72205 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72205 ']' 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.681 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:13.681 [2024-12-09 04:05:55.500957] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:16:13.681 [2024-12-09 04:05:55.501334] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.939 [2024-12-09 04:05:55.646381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.939 [2024-12-09 04:05:55.702632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.939 [2024-12-09 04:05:55.702697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:13.939 [2024-12-09 04:05:55.702724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:13.939 [2024-12-09 04:05:55.702732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:13.939 [2024-12-09 04:05:55.702738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.939 [2024-12-09 04:05:55.703139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.939 [2024-12-09 04:05:55.775990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:13.939 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.939 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:13.939 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:13.940 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:13.940 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:14.197 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.197 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.psT9wy31fj 00:16:14.197 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.psT9wy31fj 00:16:14.197 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:14.455 [2024-12-09 04:05:56.175993] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.455 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:14.713 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:14.971 [2024-12-09 04:05:56.724073] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:14.971 [2024-12-09 04:05:56.724594] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:14.971 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:15.229 malloc0 00:16:15.229 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:15.487 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.psT9wy31fj 00:16:15.745 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:16.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
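The lines above build the working configuration. A long-form PSK is generated in the interchange format NVMeTLSkey-1:02:<base64 payload>: (the "02" field selects the hash for retained-PSK derivation, and the payload appears to be the configured key material plus a checksum), written to /tmp/tmp.psT9wy31fj, and chmod'ed to 0600 before the target (nvmfpid 72205) starts. The target-side sequence, condensed from the trace with rpc.py standing for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, is roughly:

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k requests a TLS-secured listener
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.psT9wy31fj
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The "TLS support is considered experimental" and "NVMe/TCP Target Listening on 10.0.0.3 port 4420" notices above come from the -k listener registration.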
00:16:16.004 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.psT9wy31fj 00:16:16.005 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:16.005 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:16.005 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:16.005 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.psT9wy31fj 00:16:16.005 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:16.005 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72253 00:16:16.005 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:16.005 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72253 /var/tmp/bdevperf.sock 00:16:16.005 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72253 ']' 00:16:16.005 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:16.005 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:16.005 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.005 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:16.005 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.005 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:16.005 [2024-12-09 04:05:57.747588] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
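With the target configured, the positive test starts bdevperf (pid 72253) in wait mode (-z) on its own RPC socket; the steps that follow register the 0600 key file, attach the TLS-protected controller, and drive verify I/O through the bdevperf.py helper. Condensed from the trace (backgrounding with & is shown here for illustration; the script itself uses its waitforlisten helper):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.psT9wy31fj
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The 10-second run below completes over the TLS connection at roughly 4.5k IOPS with the queue depth of 128 set above.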
00:16:16.005 [2024-12-09 04:05:57.747975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72253 ] 00:16:16.005 [2024-12-09 04:05:57.897981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.266 [2024-12-09 04:05:57.986278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.266 [2024-12-09 04:05:58.063964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:16.831 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.831 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:16.831 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.psT9wy31fj 00:16:17.089 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:17.347 [2024-12-09 04:05:59.230622] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:17.605 TLSTESTn1 00:16:17.605 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:17.605 Running I/O for 10 seconds... 00:16:19.473 4297.00 IOPS, 16.79 MiB/s [2024-12-09T04:06:02.796Z] 4332.00 IOPS, 16.92 MiB/s [2024-12-09T04:06:03.732Z] 4315.33 IOPS, 16.86 MiB/s [2024-12-09T04:06:04.666Z] 4352.50 IOPS, 17.00 MiB/s [2024-12-09T04:06:05.652Z] 4394.80 IOPS, 17.17 MiB/s [2024-12-09T04:06:06.587Z] 4438.17 IOPS, 17.34 MiB/s [2024-12-09T04:06:07.519Z] 4478.86 IOPS, 17.50 MiB/s [2024-12-09T04:06:08.452Z] 4511.38 IOPS, 17.62 MiB/s [2024-12-09T04:06:09.830Z] 4534.56 IOPS, 17.71 MiB/s [2024-12-09T04:06:09.830Z] 4551.90 IOPS, 17.78 MiB/s 00:16:27.880 Latency(us) 00:16:27.880 [2024-12-09T04:06:09.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.880 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:27.880 Verification LBA range: start 0x0 length 0x2000 00:16:27.880 TLSTESTn1 : 10.01 4558.29 17.81 0.00 0.00 28031.93 5302.46 23592.96 00:16:27.880 [2024-12-09T04:06:09.830Z] =================================================================================================================== 00:16:27.880 [2024-12-09T04:06:09.830Z] Total : 4558.29 17.81 0.00 0.00 28031.93 5302.46 23592.96 00:16:27.880 { 00:16:27.880 "results": [ 00:16:27.880 { 00:16:27.880 "job": "TLSTESTn1", 00:16:27.880 "core_mask": "0x4", 00:16:27.880 "workload": "verify", 00:16:27.880 "status": "finished", 00:16:27.880 "verify_range": { 00:16:27.880 "start": 0, 00:16:27.881 "length": 8192 00:16:27.881 }, 00:16:27.881 "queue_depth": 128, 00:16:27.881 "io_size": 4096, 00:16:27.881 "runtime": 10.013851, 00:16:27.881 "iops": 4558.286317621462, 00:16:27.881 "mibps": 17.805805928208837, 00:16:27.881 "io_failed": 0, 00:16:27.881 "io_timeout": 0, 00:16:27.881 "avg_latency_us": 28031.928722620323, 00:16:27.881 "min_latency_us": 5302.458181818181, 00:16:27.881 
"max_latency_us": 23592.96 00:16:27.881 } 00:16:27.881 ], 00:16:27.881 "core_count": 1 00:16:27.881 } 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72253 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72253 ']' 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72253 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72253 00:16:27.881 killing process with pid 72253 00:16:27.881 Received shutdown signal, test time was about 10.000000 seconds 00:16:27.881 00:16:27.881 Latency(us) 00:16:27.881 [2024-12-09T04:06:09.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.881 [2024-12-09T04:06:09.831Z] =================================================================================================================== 00:16:27.881 [2024-12-09T04:06:09.831Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72253' 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72253 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72253 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.psT9wy31fj 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.psT9wy31fj 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.psT9wy31fj 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.psT9wy31fj 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.psT9wy31fj 00:16:27.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72389 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72389 /var/tmp/bdevperf.sock 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72389 ']' 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.881 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.881 [2024-12-09 04:06:09.823492] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
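The key file was just relaxed to mode 0666 (the chmod a few lines up), and another bdevperf (pid 72389) is launched under the NOT wrapper: the file-based keyring refuses key files that are readable by group or others, so the registration and the subsequent attach are expected to fail, as the "Invalid permissions for key file '/tmp/tmp.psT9wy31fj': 0100666" and -126 (Required key not available) errors below confirm. The remedy, used later in the test, is simply to restore owner-only permissions before registering the key:

chmod 0600 /tmp/tmp.psT9wy31fj
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.psT9wy31fj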
00:16:27.881 [2024-12-09 04:06:09.823617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72389 ] 00:16:28.140 [2024-12-09 04:06:09.970781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.140 [2024-12-09 04:06:10.041817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.398 [2024-12-09 04:06:10.118985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:28.965 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.965 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:28.965 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.psT9wy31fj 00:16:29.223 [2024-12-09 04:06:11.079697] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.psT9wy31fj': 0100666 00:16:29.223 [2024-12-09 04:06:11.080256] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:29.223 request: 00:16:29.223 { 00:16:29.223 "name": "key0", 00:16:29.223 "path": "/tmp/tmp.psT9wy31fj", 00:16:29.223 "method": "keyring_file_add_key", 00:16:29.223 "req_id": 1 00:16:29.223 } 00:16:29.223 Got JSON-RPC error response 00:16:29.223 response: 00:16:29.223 { 00:16:29.223 "code": -1, 00:16:29.223 "message": "Operation not permitted" 00:16:29.223 } 00:16:29.223 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:29.482 [2024-12-09 04:06:11.355859] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:29.482 [2024-12-09 04:06:11.356131] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:29.482 request: 00:16:29.482 { 00:16:29.482 "name": "TLSTEST", 00:16:29.482 "trtype": "tcp", 00:16:29.482 "traddr": "10.0.0.3", 00:16:29.482 "adrfam": "ipv4", 00:16:29.482 "trsvcid": "4420", 00:16:29.482 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.482 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:29.482 "prchk_reftag": false, 00:16:29.482 "prchk_guard": false, 00:16:29.482 "hdgst": false, 00:16:29.482 "ddgst": false, 00:16:29.482 "psk": "key0", 00:16:29.482 "allow_unrecognized_csi": false, 00:16:29.482 "method": "bdev_nvme_attach_controller", 00:16:29.482 "req_id": 1 00:16:29.482 } 00:16:29.482 Got JSON-RPC error response 00:16:29.482 response: 00:16:29.482 { 00:16:29.482 "code": -126, 00:16:29.482 "message": "Required key not available" 00:16:29.482 } 00:16:29.482 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72389 00:16:29.482 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72389 ']' 00:16:29.482 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72389 00:16:29.482 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:29.482 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.482 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72389 00:16:29.482 killing process with pid 72389 00:16:29.482 Received shutdown signal, test time was about 10.000000 seconds 00:16:29.482 00:16:29.482 Latency(us) 00:16:29.482 [2024-12-09T04:06:11.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.482 [2024-12-09T04:06:11.432Z] =================================================================================================================== 00:16:29.482 [2024-12-09T04:06:11.432Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:29.482 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:29.482 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:29.482 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72389' 00:16:29.482 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72389 00:16:29.482 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72389 00:16:29.741 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:29.741 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:29.741 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:29.741 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:29.741 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:29.741 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 72205 00:16:29.741 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72205 ']' 00:16:29.741 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72205 00:16:29.741 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:29.741 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.741 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72205 00:16:30.001 killing process with pid 72205 00:16:30.001 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:30.001 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:30.001 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72205' 00:16:30.001 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72205 00:16:30.001 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72205 00:16:30.259 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:16:30.259 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:30.259 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:30.259 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:16:30.260 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72428 00:16:30.260 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72428 00:16:30.260 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:30.260 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72428 ']' 00:16:30.260 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.260 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.260 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.260 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.260 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:30.260 [2024-12-09 04:06:12.025301] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:16:30.260 [2024-12-09 04:06:12.025388] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.260 [2024-12-09 04:06:12.167909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.518 [2024-12-09 04:06:12.231329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.518 [2024-12-09 04:06:12.231396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.518 [2024-12-09 04:06:12.231407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.518 [2024-12-09 04:06:12.231415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.518 [2024-12-09 04:06:12.231423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:30.518 [2024-12-09 04:06:12.231813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.518 [2024-12-09 04:06:12.305785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:30.518 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.518 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:30.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:30.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:30.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:30.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.psT9wy31fj 00:16:30.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:30.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.psT9wy31fj 00:16:30.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:16:30.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:30.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:16:30.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:30.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.psT9wy31fj 00:16:30.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.psT9wy31fj 00:16:30.519 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:30.778 [2024-12-09 04:06:12.696484] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.778 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:31.342 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:31.342 [2024-12-09 04:06:13.252591] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:31.343 [2024-12-09 04:06:13.252863] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:31.343 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:31.602 malloc0 00:16:31.602 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:31.860 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.psT9wy31fj 00:16:32.118 
[2024-12-09 04:06:13.968337] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.psT9wy31fj': 0100666 00:16:32.118 [2024-12-09 04:06:13.968380] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:32.118 request: 00:16:32.118 { 00:16:32.118 "name": "key0", 00:16:32.118 "path": "/tmp/tmp.psT9wy31fj", 00:16:32.118 "method": "keyring_file_add_key", 00:16:32.118 "req_id": 1 00:16:32.118 } 00:16:32.118 Got JSON-RPC error response 00:16:32.118 response: 00:16:32.118 { 00:16:32.118 "code": -1, 00:16:32.118 "message": "Operation not permitted" 00:16:32.118 } 00:16:32.118 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:32.376 [2024-12-09 04:06:14.180388] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:16:32.376 [2024-12-09 04:06:14.180455] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:32.376 request: 00:16:32.376 { 00:16:32.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.376 "host": "nqn.2016-06.io.spdk:host1", 00:16:32.376 "psk": "key0", 00:16:32.376 "method": "nvmf_subsystem_add_host", 00:16:32.376 "req_id": 1 00:16:32.376 } 00:16:32.376 Got JSON-RPC error response 00:16:32.376 response: 00:16:32.376 { 00:16:32.376 "code": -32603, 00:16:32.376 "message": "Internal error" 00:16:32.376 } 00:16:32.376 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:32.376 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:32.376 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:32.376 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:32.376 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72428 00:16:32.376 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72428 ']' 00:16:32.376 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72428 00:16:32.376 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:32.376 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.376 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72428 00:16:32.376 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:32.377 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:32.377 killing process with pid 72428 00:16:32.377 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72428' 00:16:32.377 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72428 00:16:32.377 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72428 00:16:32.635 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.psT9wy31fj 00:16:32.635 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:16:32.635 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:32.635 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:32.635 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.635 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72495 00:16:32.635 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72495 00:16:32.635 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72495 ']' 00:16:32.635 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:32.635 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.635 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.635 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.635 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.635 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.893 [2024-12-09 04:06:14.594195] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:16:32.893 [2024-12-09 04:06:14.594306] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.893 [2024-12-09 04:06:14.743817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.893 [2024-12-09 04:06:14.799962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.893 [2024-12-09 04:06:14.800032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:32.893 [2024-12-09 04:06:14.800043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:32.893 [2024-12-09 04:06:14.800051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:32.893 [2024-12-09 04:06:14.800058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
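The same permission problem was then demonstrated on the target side: with the key still at 0666, setup_nvmf_tgt's keyring_file_add_key was rejected, so the follow-up nvmf_subsystem_add_host returned -32603 ("Key 'key0' does not exist"), since the key was never added to the keyring. Condensed from the trace, the failing pair was:

rpc.py keyring_file_add_key key0 /tmp/tmp.psT9wy31fj     # rejected: mode 0100666
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0   # fails: Key 'key0' does not exist

After chmod 0600 restores the expected mode, the fresh target starting here (nvmfpid 72495) runs the same setup sequence successfully, which is what allows the TLSTESTn1 attach and the save_config dump further below.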
00:16:32.893 [2024-12-09 04:06:14.800527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.152 [2024-12-09 04:06:14.876596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:33.720 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.720 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:33.720 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:33.720 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:33.720 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.720 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.720 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.psT9wy31fj 00:16:33.720 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.psT9wy31fj 00:16:33.720 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:33.978 [2024-12-09 04:06:15.761532] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.978 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:34.236 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:34.495 [2024-12-09 04:06:16.281704] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:34.495 [2024-12-09 04:06:16.282397] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:34.495 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:34.753 malloc0 00:16:34.753 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:35.011 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.psT9wy31fj 00:16:35.269 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:35.527 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72545 00:16:35.527 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:35.527 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:35.527 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72545 /var/tmp/bdevperf.sock 00:16:35.527 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72545 ']' 
00:16:35.527 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:35.527 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.527 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:35.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:35.527 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.527 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:35.527 [2024-12-09 04:06:17.313192] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:16:35.527 [2024-12-09 04:06:17.313319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72545 ] 00:16:35.527 [2024-12-09 04:06:17.460605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.785 [2024-12-09 04:06:17.548213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.785 [2024-12-09 04:06:17.625974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:36.353 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.353 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:36.353 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.psT9wy31fj 00:16:36.612 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:36.871 [2024-12-09 04:06:18.690422] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:36.871 TLSTESTn1 00:16:36.871 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:37.439 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:16:37.439 "subsystems": [ 00:16:37.439 { 00:16:37.439 "subsystem": "keyring", 00:16:37.439 "config": [ 00:16:37.439 { 00:16:37.439 "method": "keyring_file_add_key", 00:16:37.439 "params": { 00:16:37.439 "name": "key0", 00:16:37.439 "path": "/tmp/tmp.psT9wy31fj" 00:16:37.439 } 00:16:37.439 } 00:16:37.439 ] 00:16:37.439 }, 00:16:37.439 { 00:16:37.439 "subsystem": "iobuf", 00:16:37.439 "config": [ 00:16:37.439 { 00:16:37.439 "method": "iobuf_set_options", 00:16:37.439 "params": { 00:16:37.439 "small_pool_count": 8192, 00:16:37.439 "large_pool_count": 1024, 00:16:37.439 "small_bufsize": 8192, 00:16:37.439 "large_bufsize": 135168, 00:16:37.439 "enable_numa": false 00:16:37.439 } 00:16:37.439 } 00:16:37.439 ] 00:16:37.439 }, 00:16:37.439 { 00:16:37.439 "subsystem": "sock", 00:16:37.439 "config": [ 00:16:37.439 { 00:16:37.439 "method": "sock_set_default_impl", 00:16:37.439 "params": { 
00:16:37.439 "impl_name": "uring" 00:16:37.439 } 00:16:37.439 }, 00:16:37.439 { 00:16:37.439 "method": "sock_impl_set_options", 00:16:37.439 "params": { 00:16:37.439 "impl_name": "ssl", 00:16:37.439 "recv_buf_size": 4096, 00:16:37.439 "send_buf_size": 4096, 00:16:37.439 "enable_recv_pipe": true, 00:16:37.439 "enable_quickack": false, 00:16:37.439 "enable_placement_id": 0, 00:16:37.439 "enable_zerocopy_send_server": true, 00:16:37.439 "enable_zerocopy_send_client": false, 00:16:37.439 "zerocopy_threshold": 0, 00:16:37.439 "tls_version": 0, 00:16:37.439 "enable_ktls": false 00:16:37.439 } 00:16:37.439 }, 00:16:37.439 { 00:16:37.439 "method": "sock_impl_set_options", 00:16:37.439 "params": { 00:16:37.439 "impl_name": "posix", 00:16:37.439 "recv_buf_size": 2097152, 00:16:37.439 "send_buf_size": 2097152, 00:16:37.439 "enable_recv_pipe": true, 00:16:37.439 "enable_quickack": false, 00:16:37.439 "enable_placement_id": 0, 00:16:37.439 "enable_zerocopy_send_server": true, 00:16:37.439 "enable_zerocopy_send_client": false, 00:16:37.439 "zerocopy_threshold": 0, 00:16:37.439 "tls_version": 0, 00:16:37.439 "enable_ktls": false 00:16:37.439 } 00:16:37.439 }, 00:16:37.439 { 00:16:37.439 "method": "sock_impl_set_options", 00:16:37.439 "params": { 00:16:37.439 "impl_name": "uring", 00:16:37.439 "recv_buf_size": 2097152, 00:16:37.439 "send_buf_size": 2097152, 00:16:37.439 "enable_recv_pipe": true, 00:16:37.440 "enable_quickack": false, 00:16:37.440 "enable_placement_id": 0, 00:16:37.440 "enable_zerocopy_send_server": false, 00:16:37.440 "enable_zerocopy_send_client": false, 00:16:37.440 "zerocopy_threshold": 0, 00:16:37.440 "tls_version": 0, 00:16:37.440 "enable_ktls": false 00:16:37.440 } 00:16:37.440 } 00:16:37.440 ] 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "subsystem": "vmd", 00:16:37.440 "config": [] 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "subsystem": "accel", 00:16:37.440 "config": [ 00:16:37.440 { 00:16:37.440 "method": "accel_set_options", 00:16:37.440 "params": { 00:16:37.440 "small_cache_size": 128, 00:16:37.440 "large_cache_size": 16, 00:16:37.440 "task_count": 2048, 00:16:37.440 "sequence_count": 2048, 00:16:37.440 "buf_count": 2048 00:16:37.440 } 00:16:37.440 } 00:16:37.440 ] 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "subsystem": "bdev", 00:16:37.440 "config": [ 00:16:37.440 { 00:16:37.440 "method": "bdev_set_options", 00:16:37.440 "params": { 00:16:37.440 "bdev_io_pool_size": 65535, 00:16:37.440 "bdev_io_cache_size": 256, 00:16:37.440 "bdev_auto_examine": true, 00:16:37.440 "iobuf_small_cache_size": 128, 00:16:37.440 "iobuf_large_cache_size": 16 00:16:37.440 } 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "method": "bdev_raid_set_options", 00:16:37.440 "params": { 00:16:37.440 "process_window_size_kb": 1024, 00:16:37.440 "process_max_bandwidth_mb_sec": 0 00:16:37.440 } 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "method": "bdev_iscsi_set_options", 00:16:37.440 "params": { 00:16:37.440 "timeout_sec": 30 00:16:37.440 } 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "method": "bdev_nvme_set_options", 00:16:37.440 "params": { 00:16:37.440 "action_on_timeout": "none", 00:16:37.440 "timeout_us": 0, 00:16:37.440 "timeout_admin_us": 0, 00:16:37.440 "keep_alive_timeout_ms": 10000, 00:16:37.440 "arbitration_burst": 0, 00:16:37.440 "low_priority_weight": 0, 00:16:37.440 "medium_priority_weight": 0, 00:16:37.440 "high_priority_weight": 0, 00:16:37.440 "nvme_adminq_poll_period_us": 10000, 00:16:37.440 "nvme_ioq_poll_period_us": 0, 00:16:37.440 "io_queue_requests": 0, 00:16:37.440 "delay_cmd_submit": 
true, 00:16:37.440 "transport_retry_count": 4, 00:16:37.440 "bdev_retry_count": 3, 00:16:37.440 "transport_ack_timeout": 0, 00:16:37.440 "ctrlr_loss_timeout_sec": 0, 00:16:37.440 "reconnect_delay_sec": 0, 00:16:37.440 "fast_io_fail_timeout_sec": 0, 00:16:37.440 "disable_auto_failback": false, 00:16:37.440 "generate_uuids": false, 00:16:37.440 "transport_tos": 0, 00:16:37.440 "nvme_error_stat": false, 00:16:37.440 "rdma_srq_size": 0, 00:16:37.440 "io_path_stat": false, 00:16:37.440 "allow_accel_sequence": false, 00:16:37.440 "rdma_max_cq_size": 0, 00:16:37.440 "rdma_cm_event_timeout_ms": 0, 00:16:37.440 "dhchap_digests": [ 00:16:37.440 "sha256", 00:16:37.440 "sha384", 00:16:37.440 "sha512" 00:16:37.440 ], 00:16:37.440 "dhchap_dhgroups": [ 00:16:37.440 "null", 00:16:37.440 "ffdhe2048", 00:16:37.440 "ffdhe3072", 00:16:37.440 "ffdhe4096", 00:16:37.440 "ffdhe6144", 00:16:37.440 "ffdhe8192" 00:16:37.440 ] 00:16:37.440 } 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "method": "bdev_nvme_set_hotplug", 00:16:37.440 "params": { 00:16:37.440 "period_us": 100000, 00:16:37.440 "enable": false 00:16:37.440 } 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "method": "bdev_malloc_create", 00:16:37.440 "params": { 00:16:37.440 "name": "malloc0", 00:16:37.440 "num_blocks": 8192, 00:16:37.440 "block_size": 4096, 00:16:37.440 "physical_block_size": 4096, 00:16:37.440 "uuid": "19c851e3-b129-421b-9dbb-f12be68e6b4c", 00:16:37.440 "optimal_io_boundary": 0, 00:16:37.440 "md_size": 0, 00:16:37.440 "dif_type": 0, 00:16:37.440 "dif_is_head_of_md": false, 00:16:37.440 "dif_pi_format": 0 00:16:37.440 } 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "method": "bdev_wait_for_examine" 00:16:37.440 } 00:16:37.440 ] 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "subsystem": "nbd", 00:16:37.440 "config": [] 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "subsystem": "scheduler", 00:16:37.440 "config": [ 00:16:37.440 { 00:16:37.440 "method": "framework_set_scheduler", 00:16:37.440 "params": { 00:16:37.440 "name": "static" 00:16:37.440 } 00:16:37.440 } 00:16:37.440 ] 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "subsystem": "nvmf", 00:16:37.440 "config": [ 00:16:37.440 { 00:16:37.440 "method": "nvmf_set_config", 00:16:37.440 "params": { 00:16:37.440 "discovery_filter": "match_any", 00:16:37.440 "admin_cmd_passthru": { 00:16:37.440 "identify_ctrlr": false 00:16:37.440 }, 00:16:37.440 "dhchap_digests": [ 00:16:37.440 "sha256", 00:16:37.440 "sha384", 00:16:37.440 "sha512" 00:16:37.440 ], 00:16:37.440 "dhchap_dhgroups": [ 00:16:37.440 "null", 00:16:37.440 "ffdhe2048", 00:16:37.440 "ffdhe3072", 00:16:37.440 "ffdhe4096", 00:16:37.440 "ffdhe6144", 00:16:37.440 "ffdhe8192" 00:16:37.440 ] 00:16:37.440 } 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "method": "nvmf_set_max_subsystems", 00:16:37.440 "params": { 00:16:37.440 "max_subsystems": 1024 00:16:37.440 } 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "method": "nvmf_set_crdt", 00:16:37.440 "params": { 00:16:37.440 "crdt1": 0, 00:16:37.440 "crdt2": 0, 00:16:37.440 "crdt3": 0 00:16:37.440 } 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "method": "nvmf_create_transport", 00:16:37.440 "params": { 00:16:37.440 "trtype": "TCP", 00:16:37.440 "max_queue_depth": 128, 00:16:37.440 "max_io_qpairs_per_ctrlr": 127, 00:16:37.440 "in_capsule_data_size": 4096, 00:16:37.440 "max_io_size": 131072, 00:16:37.440 "io_unit_size": 131072, 00:16:37.440 "max_aq_depth": 128, 00:16:37.440 "num_shared_buffers": 511, 00:16:37.440 "buf_cache_size": 4294967295, 00:16:37.440 "dif_insert_or_strip": false, 00:16:37.440 "zcopy": false, 
00:16:37.440 "c2h_success": false, 00:16:37.440 "sock_priority": 0, 00:16:37.440 "abort_timeout_sec": 1, 00:16:37.440 "ack_timeout": 0, 00:16:37.440 "data_wr_pool_size": 0 00:16:37.440 } 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "method": "nvmf_create_subsystem", 00:16:37.440 "params": { 00:16:37.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:37.440 "allow_any_host": false, 00:16:37.440 "serial_number": "SPDK00000000000001", 00:16:37.440 "model_number": "SPDK bdev Controller", 00:16:37.440 "max_namespaces": 10, 00:16:37.440 "min_cntlid": 1, 00:16:37.440 "max_cntlid": 65519, 00:16:37.440 "ana_reporting": false 00:16:37.440 } 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "method": "nvmf_subsystem_add_host", 00:16:37.440 "params": { 00:16:37.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:37.440 "host": "nqn.2016-06.io.spdk:host1", 00:16:37.440 "psk": "key0" 00:16:37.440 } 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "method": "nvmf_subsystem_add_ns", 00:16:37.440 "params": { 00:16:37.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:37.440 "namespace": { 00:16:37.440 "nsid": 1, 00:16:37.440 "bdev_name": "malloc0", 00:16:37.440 "nguid": "19C851E3B129421B9DBBF12BE68E6B4C", 00:16:37.440 "uuid": "19c851e3-b129-421b-9dbb-f12be68e6b4c", 00:16:37.440 "no_auto_visible": false 00:16:37.440 } 00:16:37.440 } 00:16:37.440 }, 00:16:37.440 { 00:16:37.440 "method": "nvmf_subsystem_add_listener", 00:16:37.440 "params": { 00:16:37.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:37.441 "listen_address": { 00:16:37.441 "trtype": "TCP", 00:16:37.441 "adrfam": "IPv4", 00:16:37.441 "traddr": "10.0.0.3", 00:16:37.441 "trsvcid": "4420" 00:16:37.441 }, 00:16:37.441 "secure_channel": true 00:16:37.441 } 00:16:37.441 } 00:16:37.441 ] 00:16:37.441 } 00:16:37.441 ] 00:16:37.441 }' 00:16:37.441 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:37.700 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:16:37.700 "subsystems": [ 00:16:37.700 { 00:16:37.700 "subsystem": "keyring", 00:16:37.700 "config": [ 00:16:37.700 { 00:16:37.700 "method": "keyring_file_add_key", 00:16:37.700 "params": { 00:16:37.700 "name": "key0", 00:16:37.700 "path": "/tmp/tmp.psT9wy31fj" 00:16:37.700 } 00:16:37.700 } 00:16:37.700 ] 00:16:37.700 }, 00:16:37.700 { 00:16:37.700 "subsystem": "iobuf", 00:16:37.700 "config": [ 00:16:37.700 { 00:16:37.700 "method": "iobuf_set_options", 00:16:37.700 "params": { 00:16:37.700 "small_pool_count": 8192, 00:16:37.700 "large_pool_count": 1024, 00:16:37.700 "small_bufsize": 8192, 00:16:37.700 "large_bufsize": 135168, 00:16:37.700 "enable_numa": false 00:16:37.700 } 00:16:37.700 } 00:16:37.700 ] 00:16:37.700 }, 00:16:37.700 { 00:16:37.700 "subsystem": "sock", 00:16:37.700 "config": [ 00:16:37.700 { 00:16:37.700 "method": "sock_set_default_impl", 00:16:37.700 "params": { 00:16:37.700 "impl_name": "uring" 00:16:37.700 } 00:16:37.700 }, 00:16:37.700 { 00:16:37.700 "method": "sock_impl_set_options", 00:16:37.700 "params": { 00:16:37.700 "impl_name": "ssl", 00:16:37.700 "recv_buf_size": 4096, 00:16:37.700 "send_buf_size": 4096, 00:16:37.700 "enable_recv_pipe": true, 00:16:37.700 "enable_quickack": false, 00:16:37.700 "enable_placement_id": 0, 00:16:37.700 "enable_zerocopy_send_server": true, 00:16:37.700 "enable_zerocopy_send_client": false, 00:16:37.700 "zerocopy_threshold": 0, 00:16:37.700 "tls_version": 0, 00:16:37.700 "enable_ktls": false 00:16:37.700 } 00:16:37.700 }, 
00:16:37.700 { 00:16:37.700 "method": "sock_impl_set_options", 00:16:37.700 "params": { 00:16:37.700 "impl_name": "posix", 00:16:37.700 "recv_buf_size": 2097152, 00:16:37.700 "send_buf_size": 2097152, 00:16:37.700 "enable_recv_pipe": true, 00:16:37.700 "enable_quickack": false, 00:16:37.700 "enable_placement_id": 0, 00:16:37.700 "enable_zerocopy_send_server": true, 00:16:37.700 "enable_zerocopy_send_client": false, 00:16:37.700 "zerocopy_threshold": 0, 00:16:37.700 "tls_version": 0, 00:16:37.700 "enable_ktls": false 00:16:37.700 } 00:16:37.700 }, 00:16:37.700 { 00:16:37.700 "method": "sock_impl_set_options", 00:16:37.700 "params": { 00:16:37.700 "impl_name": "uring", 00:16:37.700 "recv_buf_size": 2097152, 00:16:37.700 "send_buf_size": 2097152, 00:16:37.700 "enable_recv_pipe": true, 00:16:37.700 "enable_quickack": false, 00:16:37.700 "enable_placement_id": 0, 00:16:37.700 "enable_zerocopy_send_server": false, 00:16:37.700 "enable_zerocopy_send_client": false, 00:16:37.700 "zerocopy_threshold": 0, 00:16:37.700 "tls_version": 0, 00:16:37.700 "enable_ktls": false 00:16:37.700 } 00:16:37.700 } 00:16:37.700 ] 00:16:37.700 }, 00:16:37.700 { 00:16:37.700 "subsystem": "vmd", 00:16:37.700 "config": [] 00:16:37.700 }, 00:16:37.700 { 00:16:37.700 "subsystem": "accel", 00:16:37.700 "config": [ 00:16:37.700 { 00:16:37.700 "method": "accel_set_options", 00:16:37.700 "params": { 00:16:37.700 "small_cache_size": 128, 00:16:37.700 "large_cache_size": 16, 00:16:37.700 "task_count": 2048, 00:16:37.700 "sequence_count": 2048, 00:16:37.700 "buf_count": 2048 00:16:37.700 } 00:16:37.700 } 00:16:37.700 ] 00:16:37.700 }, 00:16:37.700 { 00:16:37.700 "subsystem": "bdev", 00:16:37.700 "config": [ 00:16:37.700 { 00:16:37.700 "method": "bdev_set_options", 00:16:37.700 "params": { 00:16:37.700 "bdev_io_pool_size": 65535, 00:16:37.700 "bdev_io_cache_size": 256, 00:16:37.700 "bdev_auto_examine": true, 00:16:37.700 "iobuf_small_cache_size": 128, 00:16:37.700 "iobuf_large_cache_size": 16 00:16:37.700 } 00:16:37.700 }, 00:16:37.700 { 00:16:37.700 "method": "bdev_raid_set_options", 00:16:37.700 "params": { 00:16:37.700 "process_window_size_kb": 1024, 00:16:37.700 "process_max_bandwidth_mb_sec": 0 00:16:37.700 } 00:16:37.700 }, 00:16:37.700 { 00:16:37.700 "method": "bdev_iscsi_set_options", 00:16:37.700 "params": { 00:16:37.700 "timeout_sec": 30 00:16:37.700 } 00:16:37.700 }, 00:16:37.700 { 00:16:37.700 "method": "bdev_nvme_set_options", 00:16:37.700 "params": { 00:16:37.700 "action_on_timeout": "none", 00:16:37.700 "timeout_us": 0, 00:16:37.700 "timeout_admin_us": 0, 00:16:37.700 "keep_alive_timeout_ms": 10000, 00:16:37.700 "arbitration_burst": 0, 00:16:37.700 "low_priority_weight": 0, 00:16:37.700 "medium_priority_weight": 0, 00:16:37.700 "high_priority_weight": 0, 00:16:37.700 "nvme_adminq_poll_period_us": 10000, 00:16:37.700 "nvme_ioq_poll_period_us": 0, 00:16:37.700 "io_queue_requests": 512, 00:16:37.700 "delay_cmd_submit": true, 00:16:37.700 "transport_retry_count": 4, 00:16:37.700 "bdev_retry_count": 3, 00:16:37.700 "transport_ack_timeout": 0, 00:16:37.700 "ctrlr_loss_timeout_sec": 0, 00:16:37.700 "reconnect_delay_sec": 0, 00:16:37.700 "fast_io_fail_timeout_sec": 0, 00:16:37.701 "disable_auto_failback": false, 00:16:37.701 "generate_uuids": false, 00:16:37.701 "transport_tos": 0, 00:16:37.701 "nvme_error_stat": false, 00:16:37.701 "rdma_srq_size": 0, 00:16:37.701 "io_path_stat": false, 00:16:37.701 "allow_accel_sequence": false, 00:16:37.701 "rdma_max_cq_size": 0, 00:16:37.701 "rdma_cm_event_timeout_ms": 0, 00:16:37.701 
"dhchap_digests": [ 00:16:37.701 "sha256", 00:16:37.701 "sha384", 00:16:37.701 "sha512" 00:16:37.701 ], 00:16:37.701 "dhchap_dhgroups": [ 00:16:37.701 "null", 00:16:37.701 "ffdhe2048", 00:16:37.701 "ffdhe3072", 00:16:37.701 "ffdhe4096", 00:16:37.701 "ffdhe6144", 00:16:37.701 "ffdhe8192" 00:16:37.701 ] 00:16:37.701 } 00:16:37.701 }, 00:16:37.701 { 00:16:37.701 "method": "bdev_nvme_attach_controller", 00:16:37.701 "params": { 00:16:37.701 "name": "TLSTEST", 00:16:37.701 "trtype": "TCP", 00:16:37.701 "adrfam": "IPv4", 00:16:37.701 "traddr": "10.0.0.3", 00:16:37.701 "trsvcid": "4420", 00:16:37.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:37.701 "prchk_reftag": false, 00:16:37.701 "prchk_guard": false, 00:16:37.701 "ctrlr_loss_timeout_sec": 0, 00:16:37.701 "reconnect_delay_sec": 0, 00:16:37.701 "fast_io_fail_timeout_sec": 0, 00:16:37.701 "psk": "key0", 00:16:37.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:37.701 "hdgst": false, 00:16:37.701 "ddgst": false, 00:16:37.701 "multipath": "multipath" 00:16:37.701 } 00:16:37.701 }, 00:16:37.701 { 00:16:37.701 "method": "bdev_nvme_set_hotplug", 00:16:37.701 "params": { 00:16:37.701 "period_us": 100000, 00:16:37.701 "enable": false 00:16:37.701 } 00:16:37.701 }, 00:16:37.701 { 00:16:37.701 "method": "bdev_wait_for_examine" 00:16:37.701 } 00:16:37.701 ] 00:16:37.701 }, 00:16:37.701 { 00:16:37.701 "subsystem": "nbd", 00:16:37.701 "config": [] 00:16:37.701 } 00:16:37.701 ] 00:16:37.701 }' 00:16:37.701 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72545 00:16:37.701 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72545 ']' 00:16:37.701 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72545 00:16:37.701 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:37.701 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.701 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72545 00:16:37.701 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:37.701 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:37.701 killing process with pid 72545 00:16:37.701 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72545' 00:16:37.701 Received shutdown signal, test time was about 10.000000 seconds 00:16:37.701 00:16:37.701 Latency(us) 00:16:37.701 [2024-12-09T04:06:19.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.701 [2024-12-09T04:06:19.651Z] =================================================================================================================== 00:16:37.701 [2024-12-09T04:06:19.651Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:37.701 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72545 00:16:37.701 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72545 00:16:37.961 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72495 00:16:37.961 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72495 ']' 00:16:37.961 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 72495 00:16:37.961 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:37.961 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.961 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72495 00:16:37.961 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:37.961 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:37.961 killing process with pid 72495 00:16:37.961 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72495' 00:16:37.961 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72495 00:16:37.961 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72495 00:16:38.223 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:38.223 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:38.223 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:38.223 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:38.223 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:16:38.223 "subsystems": [ 00:16:38.223 { 00:16:38.223 "subsystem": "keyring", 00:16:38.223 "config": [ 00:16:38.223 { 00:16:38.223 "method": "keyring_file_add_key", 00:16:38.223 "params": { 00:16:38.223 "name": "key0", 00:16:38.223 "path": "/tmp/tmp.psT9wy31fj" 00:16:38.223 } 00:16:38.223 } 00:16:38.223 ] 00:16:38.223 }, 00:16:38.223 { 00:16:38.223 "subsystem": "iobuf", 00:16:38.223 "config": [ 00:16:38.223 { 00:16:38.223 "method": "iobuf_set_options", 00:16:38.223 "params": { 00:16:38.223 "small_pool_count": 8192, 00:16:38.223 "large_pool_count": 1024, 00:16:38.223 "small_bufsize": 8192, 00:16:38.223 "large_bufsize": 135168, 00:16:38.223 "enable_numa": false 00:16:38.223 } 00:16:38.223 } 00:16:38.223 ] 00:16:38.223 }, 00:16:38.223 { 00:16:38.223 "subsystem": "sock", 00:16:38.223 "config": [ 00:16:38.223 { 00:16:38.223 "method": "sock_set_default_impl", 00:16:38.223 "params": { 00:16:38.223 "impl_name": "uring" 00:16:38.223 } 00:16:38.223 }, 00:16:38.224 { 00:16:38.224 "method": "sock_impl_set_options", 00:16:38.224 "params": { 00:16:38.224 "impl_name": "ssl", 00:16:38.224 "recv_buf_size": 4096, 00:16:38.224 "send_buf_size": 4096, 00:16:38.224 "enable_recv_pipe": true, 00:16:38.224 "enable_quickack": false, 00:16:38.224 "enable_placement_id": 0, 00:16:38.224 "enable_zerocopy_send_server": true, 00:16:38.224 "enable_zerocopy_send_client": false, 00:16:38.224 "zerocopy_threshold": 0, 00:16:38.224 "tls_version": 0, 00:16:38.224 "enable_ktls": false 00:16:38.224 } 00:16:38.224 }, 00:16:38.224 { 00:16:38.224 "method": "sock_impl_set_options", 00:16:38.224 "params": { 00:16:38.224 "impl_name": "posix", 00:16:38.224 "recv_buf_size": 2097152, 00:16:38.224 "send_buf_size": 2097152, 00:16:38.224 "enable_recv_pipe": true, 00:16:38.224 "enable_quickack": false, 00:16:38.224 "enable_placement_id": 0, 00:16:38.224 "enable_zerocopy_send_server": true, 00:16:38.224 "enable_zerocopy_send_client": false, 00:16:38.224 "zerocopy_threshold": 0, 00:16:38.224 "tls_version": 0, 00:16:38.224 "enable_ktls": false 
00:16:38.224 } 00:16:38.224 }, 00:16:38.224 { 00:16:38.224 "method": "sock_impl_set_options", 00:16:38.224 "params": { 00:16:38.224 "impl_name": "uring", 00:16:38.224 "recv_buf_size": 2097152, 00:16:38.224 "send_buf_size": 2097152, 00:16:38.224 "enable_recv_pipe": true, 00:16:38.224 "enable_quickack": false, 00:16:38.224 "enable_placement_id": 0, 00:16:38.224 "enable_zerocopy_send_server": false, 00:16:38.224 "enable_zerocopy_send_client": false, 00:16:38.224 "zerocopy_threshold": 0, 00:16:38.224 "tls_version": 0, 00:16:38.224 "enable_ktls": false 00:16:38.224 } 00:16:38.224 } 00:16:38.224 ] 00:16:38.224 }, 00:16:38.224 { 00:16:38.224 "subsystem": "vmd", 00:16:38.224 "config": [] 00:16:38.224 }, 00:16:38.224 { 00:16:38.224 "subsystem": "accel", 00:16:38.224 "config": [ 00:16:38.224 { 00:16:38.224 "method": "accel_set_options", 00:16:38.224 "params": { 00:16:38.224 "small_cache_size": 128, 00:16:38.224 "large_cache_size": 16, 00:16:38.224 "task_count": 2048, 00:16:38.224 "sequence_count": 2048, 00:16:38.224 "buf_count": 2048 00:16:38.224 } 00:16:38.224 } 00:16:38.224 ] 00:16:38.224 }, 00:16:38.224 { 00:16:38.224 "subsystem": "bdev", 00:16:38.224 "config": [ 00:16:38.224 { 00:16:38.224 "method": "bdev_set_options", 00:16:38.224 "params": { 00:16:38.224 "bdev_io_pool_size": 65535, 00:16:38.224 "bdev_io_cache_size": 256, 00:16:38.224 "bdev_auto_examine": true, 00:16:38.224 "iobuf_small_cache_size": 128, 00:16:38.224 "iobuf_large_cache_size": 16 00:16:38.224 } 00:16:38.224 }, 00:16:38.224 { 00:16:38.224 "method": "bdev_raid_set_options", 00:16:38.224 "params": { 00:16:38.224 "process_window_size_kb": 1024, 00:16:38.224 "process_max_bandwidth_mb_sec": 0 00:16:38.224 } 00:16:38.224 }, 00:16:38.224 { 00:16:38.224 "method": "bdev_iscsi_set_options", 00:16:38.224 "params": { 00:16:38.224 "timeout_sec": 30 00:16:38.224 } 00:16:38.224 }, 00:16:38.224 { 00:16:38.224 "method": "bdev_nvme_set_options", 00:16:38.224 "params": { 00:16:38.224 "action_on_timeout": "none", 00:16:38.224 "timeout_us": 0, 00:16:38.224 "timeout_admin_us": 0, 00:16:38.224 "keep_alive_timeout_ms": 10000, 00:16:38.224 "arbitration_burst": 0, 00:16:38.224 "low_priority_weight": 0, 00:16:38.224 "medium_priority_weight": 0, 00:16:38.224 "high_priority_weight": 0, 00:16:38.224 "nvme_adminq_poll_period_us": 10000, 00:16:38.224 "nvme_ioq_poll_period_us": 0, 00:16:38.224 "io_queue_requests": 0, 00:16:38.224 "delay_cmd_submit": true, 00:16:38.224 "transport_retry_count": 4, 00:16:38.224 "bdev_retry_count": 3, 00:16:38.224 "transport_ack_timeout": 0, 00:16:38.224 "ctrlr_loss_timeout_sec": 0, 00:16:38.224 "reconnect_delay_sec": 0, 00:16:38.224 "fast_io_fail_timeout_sec": 0, 00:16:38.224 "disable_auto_failback": false, 00:16:38.224 "generate_uuids": false, 00:16:38.224 "transport_tos": 0, 00:16:38.224 "nvme_error_stat": false, 00:16:38.224 "rdma_srq_size": 0, 00:16:38.224 "io_path_stat": false, 00:16:38.224 "allow_accel_sequence": false, 00:16:38.224 "rdma_max_cq_size": 0, 00:16:38.224 "rdma_cm_event_timeout_ms": 0, 00:16:38.224 "dhchap_digests": [ 00:16:38.224 "sha256", 00:16:38.224 "sha384", 00:16:38.224 "sha512" 00:16:38.224 ], 00:16:38.224 "dhchap_dhgroups": [ 00:16:38.224 "null", 00:16:38.224 "ffdhe2048", 00:16:38.224 "ffdhe3072", 00:16:38.224 "ffdhe4096", 00:16:38.224 "ffdhe6144", 00:16:38.224 "ffdhe8192" 00:16:38.224 ] 00:16:38.224 } 00:16:38.224 }, 00:16:38.224 { 00:16:38.224 "method": "bdev_nvme_set_hotplug", 00:16:38.224 "params": { 00:16:38.224 "period_us": 100000, 00:16:38.224 "enable": false 00:16:38.224 } 00:16:38.224 }, 
00:16:38.224 { 00:16:38.224 "method": "bdev_malloc_create", 00:16:38.224 "params": { 00:16:38.224 "name": "malloc0", 00:16:38.224 "num_blocks": 8192, 00:16:38.224 "block_size": 4096, 00:16:38.224 "physical_block_size": 4096, 00:16:38.224 "uuid": "19c851e3-b129-421b-9dbb-f12be68e6b4c", 00:16:38.224 "optimal_io_boundary": 0, 00:16:38.224 "md_size": 0, 00:16:38.224 "dif_type": 0, 00:16:38.224 "dif_is_head_of_md": false, 00:16:38.224 "dif_pi_format": 0 00:16:38.224 } 00:16:38.224 }, 00:16:38.224 { 00:16:38.224 "method": "bdev_wait_for_examine" 00:16:38.224 } 00:16:38.224 ] 00:16:38.224 }, 00:16:38.224 { 00:16:38.224 "subsystem": "nbd", 00:16:38.224 "config": [] 00:16:38.224 }, 00:16:38.224 { 00:16:38.224 "subsystem": "scheduler", 00:16:38.224 "config": [ 00:16:38.224 { 00:16:38.224 "method": "framework_set_scheduler", 00:16:38.224 "params": { 00:16:38.224 "name": "static" 00:16:38.224 } 00:16:38.224 } 00:16:38.224 ] 00:16:38.224 }, 00:16:38.224 { 00:16:38.224 "subsystem": "nvmf", 00:16:38.224 "config": [ 00:16:38.224 { 00:16:38.224 "method": "nvmf_set_config", 00:16:38.224 "params": { 00:16:38.224 "discovery_filter": "match_any", 00:16:38.224 "admin_cmd_passthru": { 00:16:38.224 "identify_ctrlr": false 00:16:38.224 }, 00:16:38.224 "dhchap_digests": [ 00:16:38.224 "sha256", 00:16:38.224 "sha384", 00:16:38.224 "sha512" 00:16:38.224 ], 00:16:38.224 "dhchap_dhgroups": [ 00:16:38.224 "null", 00:16:38.224 "ffdhe2048", 00:16:38.224 "ffdhe3072", 00:16:38.224 "ffdhe4096", 00:16:38.224 "ffdhe6144", 00:16:38.224 "ffdhe8192" 00:16:38.224 ] 00:16:38.224 } 00:16:38.224 }, 00:16:38.224 { 00:16:38.224 "method": "nvmf_set_max_subsystems", 00:16:38.224 "params": { 00:16:38.224 "max_subsystems": 1024 00:16:38.224 } 00:16:38.224 }, 00:16:38.224 { 00:16:38.224 "method": "nvmf_set_crdt", 00:16:38.224 "params": { 00:16:38.224 "crdt1": 0, 00:16:38.224 "crdt2": 0, 00:16:38.224 "crdt3": 0 00:16:38.224 } 00:16:38.224 }, 00:16:38.224 { 00:16:38.224 "method": "nvmf_create_transport", 00:16:38.224 "params": { 00:16:38.225 "trtype": "TCP", 00:16:38.225 "max_queue_depth": 128, 00:16:38.225 "max_io_qpairs_per_ctrlr": 127, 00:16:38.225 "in_capsule_data_size": 4096, 00:16:38.225 "max_io_size": 131072, 00:16:38.225 "io_unit_size": 131072, 00:16:38.225 "max_aq_depth": 128, 00:16:38.225 "num_shared_buffers": 511, 00:16:38.225 "buf_cache_size": 4294967295, 00:16:38.225 "dif_insert_or_strip": false, 00:16:38.225 "zcopy": false, 00:16:38.225 "c2h_success": false, 00:16:38.225 "sock_priority": 0, 00:16:38.225 "abort_timeout_sec": 1, 00:16:38.225 "ack_timeout": 0, 00:16:38.225 "data_wr_pool_size": 0 00:16:38.225 } 00:16:38.225 }, 00:16:38.225 { 00:16:38.225 "method": "nvmf_create_subsystem", 00:16:38.225 "params": { 00:16:38.225 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.225 "allow_any_host": false, 00:16:38.225 "serial_number": "SPDK00000000000001", 00:16:38.225 "model_number": "SPDK bdev Controller", 00:16:38.225 "max_namespaces": 10, 00:16:38.225 "min_cntlid": 1, 00:16:38.225 "max_cntlid": 65519, 00:16:38.225 "ana_reporting": false 00:16:38.225 } 00:16:38.225 }, 00:16:38.225 { 00:16:38.225 "method": "nvmf_subsystem_add_host", 00:16:38.225 "params": { 00:16:38.225 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.225 "host": "nqn.2016-06.io.spdk:host1", 00:16:38.225 "psk": "key0" 00:16:38.225 } 00:16:38.225 }, 00:16:38.225 { 00:16:38.225 "method": "nvmf_subsystem_add_ns", 00:16:38.225 "params": { 00:16:38.225 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.225 "namespace": { 00:16:38.225 "nsid": 1, 00:16:38.225 "bdev_name": "malloc0", 
00:16:38.225 "nguid": "19C851E3B129421B9DBBF12BE68E6B4C", 00:16:38.225 "uuid": "19c851e3-b129-421b-9dbb-f12be68e6b4c", 00:16:38.225 "no_auto_visible": false 00:16:38.225 } 00:16:38.225 } 00:16:38.225 }, 00:16:38.225 { 00:16:38.225 "method": "nvmf_subsystem_add_listener", 00:16:38.225 "params": { 00:16:38.225 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.225 "listen_address": { 00:16:38.225 "trtype": "TCP", 00:16:38.225 "adrfam": "IPv4", 00:16:38.225 "traddr": "10.0.0.3", 00:16:38.225 "trsvcid": "4420" 00:16:38.225 }, 00:16:38.225 "secure_channel": true 00:16:38.225 } 00:16:38.225 } 00:16:38.225 ] 00:16:38.225 } 00:16:38.225 ] 00:16:38.225 }' 00:16:38.225 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72595 00:16:38.225 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72595 00:16:38.225 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:38.225 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72595 ']' 00:16:38.225 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.225 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:38.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.225 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.225 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:38.225 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:38.485 [2024-12-09 04:06:20.172957] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:16:38.485 [2024-12-09 04:06:20.173092] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.485 [2024-12-09 04:06:20.312039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.485 [2024-12-09 04:06:20.382268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.486 [2024-12-09 04:06:20.382364] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.486 [2024-12-09 04:06:20.382394] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:38.486 [2024-12-09 04:06:20.382404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:38.486 [2024-12-09 04:06:20.382414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
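The target configuration echoed above is the earlier save_config snapshot being replayed into a fresh nvmf_tgt through -c /dev/fd/62. A minimal sketch of the same round-trip using an ordinary file (the filename is only an example; the default /var/tmp/spdk.sock RPC socket is assumed):

# dump the live configuration of a running target
scripts/rpc.py save_config > tgt_config.json
# start a new target from that snapshot, as the harness does here via a pipe
build/bin/nvmf_tgt -m 0x2 -c tgt_config.json
# or, assuming rpc.py's load_config (which reads JSON from stdin), replay it into a running instance
scripts/rpc.py load_config < tgt_config.json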
00:16:38.486 [2024-12-09 04:06:20.383057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.744 [2024-12-09 04:06:20.568963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:38.744 [2024-12-09 04:06:20.666042] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.002 [2024-12-09 04:06:20.697986] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:39.002 [2024-12-09 04:06:20.698252] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:39.262 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.262 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:39.262 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:39.262 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:39.262 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:39.262 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.262 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72627 00:16:39.262 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72627 /var/tmp/bdevperf.sock 00:16:39.262 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72627 ']' 00:16:39.262 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:39.262 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.262 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:39.262 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:16:39.262 "subsystems": [ 00:16:39.262 { 00:16:39.262 "subsystem": "keyring", 00:16:39.262 "config": [ 00:16:39.262 { 00:16:39.262 "method": "keyring_file_add_key", 00:16:39.262 "params": { 00:16:39.262 "name": "key0", 00:16:39.262 "path": "/tmp/tmp.psT9wy31fj" 00:16:39.262 } 00:16:39.262 } 00:16:39.262 ] 00:16:39.262 }, 00:16:39.262 { 00:16:39.262 "subsystem": "iobuf", 00:16:39.262 "config": [ 00:16:39.262 { 00:16:39.262 "method": "iobuf_set_options", 00:16:39.262 "params": { 00:16:39.262 "small_pool_count": 8192, 00:16:39.262 "large_pool_count": 1024, 00:16:39.262 "small_bufsize": 8192, 00:16:39.262 "large_bufsize": 135168, 00:16:39.262 "enable_numa": false 00:16:39.262 } 00:16:39.262 } 00:16:39.262 ] 00:16:39.262 }, 00:16:39.262 { 00:16:39.262 "subsystem": "sock", 00:16:39.262 "config": [ 00:16:39.262 { 00:16:39.262 "method": "sock_set_default_impl", 00:16:39.262 "params": { 00:16:39.262 "impl_name": "uring" 00:16:39.262 } 00:16:39.262 }, 00:16:39.262 { 00:16:39.262 "method": "sock_impl_set_options", 00:16:39.262 "params": { 00:16:39.262 "impl_name": "ssl", 00:16:39.262 "recv_buf_size": 4096, 00:16:39.262 "send_buf_size": 4096, 00:16:39.262 "enable_recv_pipe": true, 00:16:39.262 "enable_quickack": false, 00:16:39.262 "enable_placement_id": 0, 00:16:39.262 "enable_zerocopy_send_server": true, 00:16:39.262 
"enable_zerocopy_send_client": false, 00:16:39.262 "zerocopy_threshold": 0, 00:16:39.262 "tls_version": 0, 00:16:39.262 "enable_ktls": false 00:16:39.262 } 00:16:39.262 }, 00:16:39.262 { 00:16:39.262 "method": "sock_impl_set_options", 00:16:39.262 "params": { 00:16:39.262 "impl_name": "posix", 00:16:39.262 "recv_buf_size": 2097152, 00:16:39.262 "send_buf_size": 2097152, 00:16:39.262 "enable_recv_pipe": true, 00:16:39.262 "enable_quickack": false, 00:16:39.262 "enable_placement_id": 0, 00:16:39.262 "enable_zerocopy_send_server": true, 00:16:39.262 "enable_zerocopy_send_client": false, 00:16:39.262 "zerocopy_threshold": 0, 00:16:39.262 "tls_version": 0, 00:16:39.262 "enable_ktls": false 00:16:39.262 } 00:16:39.262 }, 00:16:39.262 { 00:16:39.262 "method": "sock_impl_set_options", 00:16:39.262 "params": { 00:16:39.262 "impl_name": "uring", 00:16:39.262 "recv_buf_size": 2097152, 00:16:39.262 "send_buf_size": 2097152, 00:16:39.262 "enable_recv_pipe": true, 00:16:39.262 "enable_quickack": false, 00:16:39.262 "enable_placement_id": 0, 00:16:39.262 "enable_zerocopy_send_server": false, 00:16:39.262 "enable_zerocopy_send_client": false, 00:16:39.262 "zerocopy_threshold": 0, 00:16:39.262 "tls_version": 0, 00:16:39.262 "enable_ktls": false 00:16:39.262 } 00:16:39.262 } 00:16:39.262 ] 00:16:39.262 }, 00:16:39.262 { 00:16:39.262 "subsystem": "vmd", 00:16:39.262 "config": [] 00:16:39.262 }, 00:16:39.262 { 00:16:39.262 "subsystem": "accel", 00:16:39.262 "config": [ 00:16:39.262 { 00:16:39.262 "method": "accel_set_options", 00:16:39.262 "params": { 00:16:39.262 "small_cache_size": 128, 00:16:39.262 "large_cache_size": 16, 00:16:39.262 "task_count": 2048, 00:16:39.262 "sequence_count": 2048, 00:16:39.262 "buf_count": 2048 00:16:39.262 } 00:16:39.262 } 00:16:39.262 ] 00:16:39.262 }, 00:16:39.262 { 00:16:39.262 "subsystem": "bdev", 00:16:39.262 "config": [ 00:16:39.262 { 00:16:39.262 "method": "bdev_set_options", 00:16:39.262 "params": { 00:16:39.262 "bdev_io_pool_size": 65535, 00:16:39.262 "bdev_io_cache_size": 256, 00:16:39.262 "bdev_auto_examine": true, 00:16:39.262 "iobuf_small_cache_size": 128, 00:16:39.262 "iobuf_large_cache_size": 16 00:16:39.262 } 00:16:39.262 }, 00:16:39.262 { 00:16:39.262 "method": "bdev_raid_set_options", 00:16:39.262 "params": { 00:16:39.262 "process_window_size_kb": 1024, 00:16:39.262 "process_max_bandwidth_mb_sec": 0 00:16:39.262 } 00:16:39.262 }, 00:16:39.262 { 00:16:39.262 "method": "bdev_iscsi_set_options", 00:16:39.262 "params": { 00:16:39.262 "timeout_sec": 30 00:16:39.262 } 00:16:39.262 }, 00:16:39.262 { 00:16:39.262 "method": "bdev_nvme_set_options", 00:16:39.262 "params": { 00:16:39.262 "action_on_timeout": "none", 00:16:39.262 "timeout_us": 0, 00:16:39.262 "timeout_admin_us": 0, 00:16:39.262 "keep_alive_timeout_ms": 10000, 00:16:39.262 "arbitration_burst": 0, 00:16:39.262 "low_priority_weight": 0, 00:16:39.262 "medium_priority_weight": 0, 00:16:39.262 "high_priority_weight": 0, 00:16:39.262 "nvme_adminq_poll_period_us": 10000, 00:16:39.262 "nvme_ioq_poll_period_us": 0, 00:16:39.262 "io_queue_requests": 512, 00:16:39.262 "delay_cmd_submit": true, 00:16:39.262 "transport_retry_count": 4, 00:16:39.262 "bdev_retry_count": 3, 00:16:39.262 "transport_ack_timeout": 0, 00:16:39.262 "ctrlr_loss_timeout_sec": 0, 00:16:39.262 "reconnect_delay_sec": 0, 00:16:39.262 "fast_io_fail_timeout_sec": 0, 00:16:39.262 "disable_auto_failback": false, 00:16:39.262 "generate_uuids": false, 00:16:39.263 "transport_tos": 0, 00:16:39.263 "nvme_error_stat": false, 00:16:39.263 "rdma_srq_size": 0, 
00:16:39.263 "io_path_stat": false, 00:16:39.263 "allow_accel_sequence": false, 00:16:39.263 "rdma_max_cq_size": 0, 00:16:39.263 "rdma_cm_event_timeout_ms": 0, 00:16:39.263 "dhchap_digests": [ 00:16:39.263 "sha256", 00:16:39.263 "sha384", 00:16:39.263 "sha512" 00:16:39.263 ], 00:16:39.263 "dhchap_dhgroups": [ 00:16:39.263 "null", 00:16:39.263 "ffdhe2048", 00:16:39.263 "ffdhe3072", 00:16:39.263 "ffdhe4096", 00:16:39.263 "ffdhe6144", 00:16:39.263 "ffdhe8192" 00:16:39.263 ] 00:16:39.263 } 00:16:39.263 }, 00:16:39.263 { 00:16:39.263 "method": "bdev_nvme_attach_controller", 00:16:39.263 "params": { 00:16:39.263 "name": "TLSTEST", 00:16:39.263 "trtype": "TCP", 00:16:39.263 "adrfam": "IPv4", 00:16:39.263 "traddr": "10.0.0.3", 00:16:39.263 "trsvcid": "4420", 00:16:39.263 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:39.263 "prchk_reftag": false, 00:16:39.263 "prchk_guard": false, 00:16:39.263 "ctrlr_loss_timeout_sec": 0, 00:16:39.263 "reconnect_delay_sec": 0, 00:16:39.263 "fast_io_fail_timeout_sec": 0, 00:16:39.263 "psk": "key0", 00:16:39.263 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:39.263 "hdgst": false, 00:16:39.263 "ddgst": false, 00:16:39.263 "multipath": "multipath" 00:16:39.263 } 00:16:39.263 }, 00:16:39.263 { 00:16:39.263 "method": "bdev_nvme_set_hotplug", 00:16:39.263 "params": { 00:16:39.263 "period_us": 100000, 00:16:39.263 "enable": false 00:16:39.263 } 00:16:39.263 }, 00:16:39.263 { 00:16:39.263 "method": "bdev_wait_for_examine" 00:16:39.263 } 00:16:39.263 ] 00:16:39.263 }, 00:16:39.263 { 00:16:39.263 "subsystem": "nbd", 00:16:39.263 "config": [] 00:16:39.263 } 00:16:39.263 ] 00:16:39.263 }' 00:16:39.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:39.263 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:39.263 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.263 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:39.522 [2024-12-09 04:06:21.244031] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:16:39.522 [2024-12-09 04:06:21.244647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72627 ] 00:16:39.522 [2024-12-09 04:06:21.390296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.781 [2024-12-09 04:06:21.471101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.781 [2024-12-09 04:06:21.636057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:39.781 [2024-12-09 04:06:21.707275] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:40.718 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.718 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:40.718 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:40.718 Running I/O for 10 seconds... 
00:16:42.607 4608.00 IOPS, 18.00 MiB/s [2024-12-09T04:06:25.490Z] 4568.00 IOPS, 17.84 MiB/s [2024-12-09T04:06:26.874Z] 4566.00 IOPS, 17.84 MiB/s [2024-12-09T04:06:27.810Z] 4620.50 IOPS, 18.05 MiB/s [2024-12-09T04:06:28.744Z] 4654.00 IOPS, 18.18 MiB/s [2024-12-09T04:06:29.742Z] 4677.67 IOPS, 18.27 MiB/s [2024-12-09T04:06:30.677Z] 4669.14 IOPS, 18.24 MiB/s [2024-12-09T04:06:31.637Z] 4626.50 IOPS, 18.07 MiB/s [2024-12-09T04:06:32.570Z] 4591.33 IOPS, 17.93 MiB/s [2024-12-09T04:06:32.570Z] 4573.10 IOPS, 17.86 MiB/s 00:16:50.620 Latency(us) 00:16:50.620 [2024-12-09T04:06:32.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.620 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:50.620 Verification LBA range: start 0x0 length 0x2000 00:16:50.620 TLSTESTn1 : 10.01 4579.21 17.89 0.00 0.00 27904.23 4527.94 20733.21 00:16:50.620 [2024-12-09T04:06:32.570Z] =================================================================================================================== 00:16:50.620 [2024-12-09T04:06:32.570Z] Total : 4579.21 17.89 0.00 0.00 27904.23 4527.94 20733.21 00:16:50.620 { 00:16:50.620 "results": [ 00:16:50.620 { 00:16:50.620 "job": "TLSTESTn1", 00:16:50.620 "core_mask": "0x4", 00:16:50.620 "workload": "verify", 00:16:50.620 "status": "finished", 00:16:50.620 "verify_range": { 00:16:50.620 "start": 0, 00:16:50.620 "length": 8192 00:16:50.620 }, 00:16:50.620 "queue_depth": 128, 00:16:50.620 "io_size": 4096, 00:16:50.620 "runtime": 10.013962, 00:16:50.620 "iops": 4579.206511868129, 00:16:50.620 "mibps": 17.88752543698488, 00:16:50.620 "io_failed": 0, 00:16:50.620 "io_timeout": 0, 00:16:50.620 "avg_latency_us": 27904.231915244563, 00:16:50.620 "min_latency_us": 4527.941818181818, 00:16:50.620 "max_latency_us": 20733.20727272727 00:16:50.620 } 00:16:50.620 ], 00:16:50.620 "core_count": 1 00:16:50.620 } 00:16:50.620 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:50.620 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72627 00:16:50.620 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72627 ']' 00:16:50.620 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72627 00:16:50.620 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:50.620 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.620 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72627 00:16:50.620 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:50.620 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:50.620 killing process with pid 72627 00:16:50.620 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72627' 00:16:50.620 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72627 00:16:50.620 Received shutdown signal, test time was about 10.000000 seconds 00:16:50.620 00:16:50.620 Latency(us) 00:16:50.620 [2024-12-09T04:06:32.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.620 [2024-12-09T04:06:32.570Z] 
=================================================================================================================== 00:16:50.620 [2024-12-09T04:06:32.570Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:50.620 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72627 00:16:50.878 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72595 00:16:50.878 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72595 ']' 00:16:50.878 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72595 00:16:51.136 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:51.136 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.136 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72595 00:16:51.136 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:51.136 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:51.136 killing process with pid 72595 00:16:51.136 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72595' 00:16:51.136 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72595 00:16:51.136 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72595 00:16:51.393 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:16:51.393 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:51.393 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:51.393 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.393 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72773 00:16:51.393 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:51.393 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72773 00:16:51.393 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72773 ']' 00:16:51.393 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.393 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.393 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.393 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.393 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.393 [2024-12-09 04:06:33.220846] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:16:51.393 [2024-12-09 04:06:33.220975] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.651 [2024-12-09 04:06:33.375026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.651 [2024-12-09 04:06:33.445462] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.651 [2024-12-09 04:06:33.445552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.651 [2024-12-09 04:06:33.445587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.651 [2024-12-09 04:06:33.445599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.651 [2024-12-09 04:06:33.445609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.651 [2024-12-09 04:06:33.446160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.651 [2024-12-09 04:06:33.523599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:52.583 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.583 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:52.583 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:52.583 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:52.583 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.583 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.583 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.psT9wy31fj 00:16:52.583 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.psT9wy31fj 00:16:52.583 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:52.583 [2024-12-09 04:06:34.527920] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.840 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:52.840 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:53.098 [2024-12-09 04:06:35.028050] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:53.098 [2024-12-09 04:06:35.028416] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:53.362 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:53.362 malloc0 00:16:53.362 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
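Stripped of the xtrace prefixes, the target-side setup driven by setup_nvmf_tgt above reduces to the RPC sequence sketched below (addresses, NQN and bdev name as used in this run); the PSK registration and host association that complete it follow right after:

# TCP transport plus a subsystem listening with TLS enabled (-k), backed by a 32 MB malloc bdev (4096-byte blocks)
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1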
00:16:53.619 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.psT9wy31fj 00:16:53.877 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:54.135 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72823 00:16:54.135 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:54.135 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:54.135 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72823 /var/tmp/bdevperf.sock 00:16:54.135 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72823 ']' 00:16:54.135 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:54.135 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:54.135 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:54.135 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.135 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:54.135 [2024-12-09 04:06:36.001756] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:16:54.135 [2024-12-09 04:06:36.001857] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72823 ] 00:16:54.394 [2024-12-09 04:06:36.153562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.394 [2024-12-09 04:06:36.225680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.394 [2024-12-09 04:06:36.303965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:54.962 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.962 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:54.962 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.psT9wy31fj 00:16:55.530 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:55.530 [2024-12-09 04:06:37.385460] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:55.530 nvme0n1 00:16:55.530 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:55.789 Running I/O for 1 seconds... 00:16:56.726 4736.00 IOPS, 18.50 MiB/s 00:16:56.726 Latency(us) 00:16:56.726 [2024-12-09T04:06:38.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.726 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:56.726 Verification LBA range: start 0x0 length 0x2000 00:16:56.726 nvme0n1 : 1.03 4744.12 18.53 0.00 0.00 26719.39 8757.99 20018.27 00:16:56.726 [2024-12-09T04:06:38.676Z] =================================================================================================================== 00:16:56.726 [2024-12-09T04:06:38.676Z] Total : 4744.12 18.53 0.00 0.00 26719.39 8757.99 20018.27 00:16:56.726 { 00:16:56.726 "results": [ 00:16:56.726 { 00:16:56.726 "job": "nvme0n1", 00:16:56.726 "core_mask": "0x2", 00:16:56.726 "workload": "verify", 00:16:56.726 "status": "finished", 00:16:56.726 "verify_range": { 00:16:56.726 "start": 0, 00:16:56.726 "length": 8192 00:16:56.726 }, 00:16:56.726 "queue_depth": 128, 00:16:56.726 "io_size": 4096, 00:16:56.726 "runtime": 1.025269, 00:16:56.726 "iops": 4744.12081122125, 00:16:56.726 "mibps": 18.531721918833007, 00:16:56.726 "io_failed": 0, 00:16:56.726 "io_timeout": 0, 00:16:56.726 "avg_latency_us": 26719.393684210525, 00:16:56.726 "min_latency_us": 8757.992727272727, 00:16:56.726 "max_latency_us": 20018.269090909092 00:16:56.726 } 00:16:56.726 ], 00:16:56.726 "core_count": 1 00:16:56.726 } 00:16:56.726 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72823 00:16:56.726 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72823 ']' 00:16:56.726 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72823 00:16:56.726 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:16:56.726 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.726 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72823 00:16:56.726 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:56.726 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:56.726 killing process with pid 72823 00:16:56.726 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72823' 00:16:56.726 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72823 00:16:56.726 Received shutdown signal, test time was about 1.000000 seconds 00:16:56.726 00:16:56.726 Latency(us) 00:16:56.726 [2024-12-09T04:06:38.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.726 [2024-12-09T04:06:38.676Z] =================================================================================================================== 00:16:56.726 [2024-12-09T04:06:38.676Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:56.726 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72823 00:16:56.985 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72773 00:16:56.985 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72773 ']' 00:16:56.985 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72773 00:16:56.985 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:57.243 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.243 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72773 00:16:57.243 killing process with pid 72773 00:16:57.243 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.243 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.243 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72773' 00:16:57.243 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72773 00:16:57.243 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72773 00:16:57.501 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:16:57.501 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:57.501 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.501 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.501 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72880 00:16:57.501 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:57.501 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72880 00:16:57.501 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 72880 ']' 00:16:57.501 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.501 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.501 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.501 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.501 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.501 [2024-12-09 04:06:39.291836] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:16:57.501 [2024-12-09 04:06:39.291928] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.501 [2024-12-09 04:06:39.441009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.759 [2024-12-09 04:06:39.496781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.759 [2024-12-09 04:06:39.496853] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.759 [2024-12-09 04:06:39.496883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.759 [2024-12-09 04:06:39.496890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.759 [2024-12-09 04:06:39.496897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:57.759 [2024-12-09 04:06:39.497333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.759 [2024-12-09 04:06:39.571431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:57.759 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.759 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:57.759 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:57.759 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:57.759 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.759 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.759 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:16:57.759 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.759 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.759 [2024-12-09 04:06:39.696772] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.018 malloc0 00:16:58.018 [2024-12-09 04:06:39.733016] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:58.018 [2024-12-09 04:06:39.733311] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:58.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:58.018 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.018 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72899 00:16:58.018 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:58.018 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72899 /var/tmp/bdevperf.sock 00:16:58.018 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72899 ']' 00:16:58.018 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:58.018 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.018 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:58.018 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.018 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:58.018 [2024-12-09 04:06:39.828141] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:16:58.018 [2024-12-09 04:06:39.828458] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72899 ] 00:16:58.276 [2024-12-09 04:06:39.974930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.276 [2024-12-09 04:06:40.048048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.276 [2024-12-09 04:06:40.126232] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:59.211 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.211 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:59.211 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.psT9wy31fj 00:16:59.211 04:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:59.469 [2024-12-09 04:06:41.341291] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:59.469 nvme0n1 00:16:59.727 04:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:59.727 Running I/O for 1 seconds... 00:17:00.660 4608.00 IOPS, 18.00 MiB/s 00:17:00.660 Latency(us) 00:17:00.660 [2024-12-09T04:06:42.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.660 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:00.660 Verification LBA range: start 0x0 length 0x2000 00:17:00.660 nvme0n1 : 1.02 4646.88 18.15 0.00 0.00 27269.65 6970.65 17873.45 00:17:00.660 [2024-12-09T04:06:42.610Z] =================================================================================================================== 00:17:00.660 [2024-12-09T04:06:42.610Z] Total : 4646.88 18.15 0.00 0.00 27269.65 6970.65 17873.45 00:17:00.660 { 00:17:00.660 "results": [ 00:17:00.660 { 00:17:00.660 "job": "nvme0n1", 00:17:00.660 "core_mask": "0x2", 00:17:00.660 "workload": "verify", 00:17:00.660 "status": "finished", 00:17:00.660 "verify_range": { 00:17:00.660 "start": 0, 00:17:00.660 "length": 8192 00:17:00.660 }, 00:17:00.660 "queue_depth": 128, 00:17:00.660 "io_size": 4096, 00:17:00.660 "runtime": 1.019179, 00:17:00.660 "iops": 4646.8775357420045, 00:17:00.660 "mibps": 18.151865373992205, 00:17:00.660 "io_failed": 0, 00:17:00.660 "io_timeout": 0, 00:17:00.660 "avg_latency_us": 27269.64520884521, 00:17:00.660 "min_latency_us": 6970.647272727273, 00:17:00.660 "max_latency_us": 17873.454545454544 00:17:00.660 } 00:17:00.660 ], 00:17:00.660 "core_count": 1 00:17:00.660 } 00:17:00.660 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:17:00.660 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.660 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.930 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.930 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:17:00.930 "subsystems": [ 00:17:00.930 { 00:17:00.930 "subsystem": "keyring", 00:17:00.930 "config": [ 00:17:00.930 { 00:17:00.930 "method": "keyring_file_add_key", 00:17:00.930 "params": { 00:17:00.930 "name": "key0", 00:17:00.930 "path": "/tmp/tmp.psT9wy31fj" 00:17:00.930 } 00:17:00.930 } 00:17:00.930 ] 00:17:00.930 }, 00:17:00.930 { 00:17:00.930 "subsystem": "iobuf", 00:17:00.930 "config": [ 00:17:00.930 { 00:17:00.930 "method": "iobuf_set_options", 00:17:00.930 "params": { 00:17:00.930 "small_pool_count": 8192, 00:17:00.930 "large_pool_count": 1024, 00:17:00.930 "small_bufsize": 8192, 00:17:00.930 "large_bufsize": 135168, 00:17:00.930 "enable_numa": false 00:17:00.930 } 00:17:00.930 } 00:17:00.930 ] 00:17:00.930 }, 00:17:00.930 { 00:17:00.930 "subsystem": "sock", 00:17:00.930 "config": [ 00:17:00.930 { 00:17:00.930 "method": "sock_set_default_impl", 00:17:00.930 "params": { 00:17:00.930 "impl_name": "uring" 00:17:00.930 } 00:17:00.930 }, 00:17:00.930 { 00:17:00.930 "method": "sock_impl_set_options", 00:17:00.930 "params": { 00:17:00.930 "impl_name": "ssl", 00:17:00.930 "recv_buf_size": 4096, 00:17:00.930 "send_buf_size": 4096, 00:17:00.930 "enable_recv_pipe": true, 00:17:00.930 "enable_quickack": false, 00:17:00.930 "enable_placement_id": 0, 00:17:00.930 "enable_zerocopy_send_server": true, 00:17:00.930 "enable_zerocopy_send_client": false, 00:17:00.930 "zerocopy_threshold": 0, 00:17:00.930 "tls_version": 0, 00:17:00.930 "enable_ktls": false 00:17:00.930 } 00:17:00.930 }, 00:17:00.930 { 00:17:00.930 "method": "sock_impl_set_options", 00:17:00.930 "params": { 00:17:00.930 "impl_name": "posix", 00:17:00.930 "recv_buf_size": 2097152, 00:17:00.930 "send_buf_size": 2097152, 00:17:00.930 "enable_recv_pipe": true, 00:17:00.930 "enable_quickack": false, 00:17:00.930 "enable_placement_id": 0, 00:17:00.930 "enable_zerocopy_send_server": true, 00:17:00.930 "enable_zerocopy_send_client": false, 00:17:00.930 "zerocopy_threshold": 0, 00:17:00.930 "tls_version": 0, 00:17:00.930 "enable_ktls": false 00:17:00.930 } 00:17:00.930 }, 00:17:00.930 { 00:17:00.930 "method": "sock_impl_set_options", 00:17:00.930 "params": { 00:17:00.930 "impl_name": "uring", 00:17:00.930 "recv_buf_size": 2097152, 00:17:00.930 "send_buf_size": 2097152, 00:17:00.930 "enable_recv_pipe": true, 00:17:00.930 "enable_quickack": false, 00:17:00.930 "enable_placement_id": 0, 00:17:00.930 "enable_zerocopy_send_server": false, 00:17:00.930 "enable_zerocopy_send_client": false, 00:17:00.930 "zerocopy_threshold": 0, 00:17:00.930 "tls_version": 0, 00:17:00.930 "enable_ktls": false 00:17:00.930 } 00:17:00.930 } 00:17:00.930 ] 00:17:00.930 }, 00:17:00.930 { 00:17:00.930 "subsystem": "vmd", 00:17:00.930 "config": [] 00:17:00.930 }, 00:17:00.930 { 00:17:00.930 "subsystem": "accel", 00:17:00.930 "config": [ 00:17:00.930 { 00:17:00.930 "method": "accel_set_options", 00:17:00.930 "params": { 00:17:00.930 "small_cache_size": 128, 00:17:00.930 "large_cache_size": 16, 00:17:00.930 "task_count": 2048, 00:17:00.930 "sequence_count": 2048, 00:17:00.930 "buf_count": 2048 00:17:00.930 } 00:17:00.930 } 00:17:00.930 ] 00:17:00.930 }, 00:17:00.930 { 00:17:00.930 "subsystem": "bdev", 00:17:00.930 "config": [ 00:17:00.930 { 00:17:00.930 "method": "bdev_set_options", 00:17:00.930 "params": { 00:17:00.930 "bdev_io_pool_size": 65535, 00:17:00.930 "bdev_io_cache_size": 256, 00:17:00.930 "bdev_auto_examine": true, 
00:17:00.930 "iobuf_small_cache_size": 128, 00:17:00.930 "iobuf_large_cache_size": 16 00:17:00.930 } 00:17:00.930 }, 00:17:00.930 { 00:17:00.930 "method": "bdev_raid_set_options", 00:17:00.930 "params": { 00:17:00.930 "process_window_size_kb": 1024, 00:17:00.930 "process_max_bandwidth_mb_sec": 0 00:17:00.930 } 00:17:00.930 }, 00:17:00.930 { 00:17:00.930 "method": "bdev_iscsi_set_options", 00:17:00.931 "params": { 00:17:00.931 "timeout_sec": 30 00:17:00.931 } 00:17:00.931 }, 00:17:00.931 { 00:17:00.931 "method": "bdev_nvme_set_options", 00:17:00.931 "params": { 00:17:00.931 "action_on_timeout": "none", 00:17:00.931 "timeout_us": 0, 00:17:00.931 "timeout_admin_us": 0, 00:17:00.931 "keep_alive_timeout_ms": 10000, 00:17:00.931 "arbitration_burst": 0, 00:17:00.931 "low_priority_weight": 0, 00:17:00.931 "medium_priority_weight": 0, 00:17:00.931 "high_priority_weight": 0, 00:17:00.931 "nvme_adminq_poll_period_us": 10000, 00:17:00.931 "nvme_ioq_poll_period_us": 0, 00:17:00.931 "io_queue_requests": 0, 00:17:00.931 "delay_cmd_submit": true, 00:17:00.931 "transport_retry_count": 4, 00:17:00.931 "bdev_retry_count": 3, 00:17:00.931 "transport_ack_timeout": 0, 00:17:00.931 "ctrlr_loss_timeout_sec": 0, 00:17:00.931 "reconnect_delay_sec": 0, 00:17:00.931 "fast_io_fail_timeout_sec": 0, 00:17:00.931 "disable_auto_failback": false, 00:17:00.931 "generate_uuids": false, 00:17:00.931 "transport_tos": 0, 00:17:00.931 "nvme_error_stat": false, 00:17:00.931 "rdma_srq_size": 0, 00:17:00.931 "io_path_stat": false, 00:17:00.931 "allow_accel_sequence": false, 00:17:00.931 "rdma_max_cq_size": 0, 00:17:00.931 "rdma_cm_event_timeout_ms": 0, 00:17:00.931 "dhchap_digests": [ 00:17:00.931 "sha256", 00:17:00.931 "sha384", 00:17:00.931 "sha512" 00:17:00.931 ], 00:17:00.931 "dhchap_dhgroups": [ 00:17:00.931 "null", 00:17:00.931 "ffdhe2048", 00:17:00.931 "ffdhe3072", 00:17:00.931 "ffdhe4096", 00:17:00.931 "ffdhe6144", 00:17:00.931 "ffdhe8192" 00:17:00.931 ] 00:17:00.931 } 00:17:00.931 }, 00:17:00.931 { 00:17:00.931 "method": "bdev_nvme_set_hotplug", 00:17:00.931 "params": { 00:17:00.931 "period_us": 100000, 00:17:00.931 "enable": false 00:17:00.931 } 00:17:00.931 }, 00:17:00.931 { 00:17:00.931 "method": "bdev_malloc_create", 00:17:00.931 "params": { 00:17:00.931 "name": "malloc0", 00:17:00.931 "num_blocks": 8192, 00:17:00.931 "block_size": 4096, 00:17:00.931 "physical_block_size": 4096, 00:17:00.931 "uuid": "d05e66bb-f10d-4f91-adc0-d0cb03e565e0", 00:17:00.931 "optimal_io_boundary": 0, 00:17:00.931 "md_size": 0, 00:17:00.931 "dif_type": 0, 00:17:00.931 "dif_is_head_of_md": false, 00:17:00.931 "dif_pi_format": 0 00:17:00.931 } 00:17:00.931 }, 00:17:00.931 { 00:17:00.931 "method": "bdev_wait_for_examine" 00:17:00.931 } 00:17:00.931 ] 00:17:00.931 }, 00:17:00.931 { 00:17:00.931 "subsystem": "nbd", 00:17:00.931 "config": [] 00:17:00.931 }, 00:17:00.931 { 00:17:00.931 "subsystem": "scheduler", 00:17:00.931 "config": [ 00:17:00.931 { 00:17:00.931 "method": "framework_set_scheduler", 00:17:00.931 "params": { 00:17:00.931 "name": "static" 00:17:00.931 } 00:17:00.931 } 00:17:00.931 ] 00:17:00.931 }, 00:17:00.931 { 00:17:00.931 "subsystem": "nvmf", 00:17:00.931 "config": [ 00:17:00.931 { 00:17:00.931 "method": "nvmf_set_config", 00:17:00.931 "params": { 00:17:00.931 "discovery_filter": "match_any", 00:17:00.931 "admin_cmd_passthru": { 00:17:00.931 "identify_ctrlr": false 00:17:00.931 }, 00:17:00.931 "dhchap_digests": [ 00:17:00.931 "sha256", 00:17:00.931 "sha384", 00:17:00.931 "sha512" 00:17:00.931 ], 00:17:00.931 "dhchap_dhgroups": [ 
00:17:00.931 "null", 00:17:00.931 "ffdhe2048", 00:17:00.931 "ffdhe3072", 00:17:00.931 "ffdhe4096", 00:17:00.931 "ffdhe6144", 00:17:00.931 "ffdhe8192" 00:17:00.931 ] 00:17:00.931 } 00:17:00.931 }, 00:17:00.931 { 00:17:00.931 "method": "nvmf_set_max_subsystems", 00:17:00.931 "params": { 00:17:00.931 "max_subsystems": 1024 00:17:00.931 } 00:17:00.931 }, 00:17:00.931 { 00:17:00.931 "method": "nvmf_set_crdt", 00:17:00.931 "params": { 00:17:00.931 "crdt1": 0, 00:17:00.931 "crdt2": 0, 00:17:00.931 "crdt3": 0 00:17:00.931 } 00:17:00.931 }, 00:17:00.931 { 00:17:00.931 "method": "nvmf_create_transport", 00:17:00.931 "params": { 00:17:00.931 "trtype": "TCP", 00:17:00.931 "max_queue_depth": 128, 00:17:00.931 "max_io_qpairs_per_ctrlr": 127, 00:17:00.931 "in_capsule_data_size": 4096, 00:17:00.931 "max_io_size": 131072, 00:17:00.931 "io_unit_size": 131072, 00:17:00.931 "max_aq_depth": 128, 00:17:00.931 "num_shared_buffers": 511, 00:17:00.931 "buf_cache_size": 4294967295, 00:17:00.931 "dif_insert_or_strip": false, 00:17:00.931 "zcopy": false, 00:17:00.931 "c2h_success": false, 00:17:00.931 "sock_priority": 0, 00:17:00.931 "abort_timeout_sec": 1, 00:17:00.931 "ack_timeout": 0, 00:17:00.931 "data_wr_pool_size": 0 00:17:00.931 } 00:17:00.931 }, 00:17:00.931 { 00:17:00.931 "method": "nvmf_create_subsystem", 00:17:00.931 "params": { 00:17:00.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.931 "allow_any_host": false, 00:17:00.931 "serial_number": "00000000000000000000", 00:17:00.931 "model_number": "SPDK bdev Controller", 00:17:00.931 "max_namespaces": 32, 00:17:00.931 "min_cntlid": 1, 00:17:00.931 "max_cntlid": 65519, 00:17:00.931 "ana_reporting": false 00:17:00.931 } 00:17:00.931 }, 00:17:00.932 { 00:17:00.932 "method": "nvmf_subsystem_add_host", 00:17:00.932 "params": { 00:17:00.932 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.932 "host": "nqn.2016-06.io.spdk:host1", 00:17:00.932 "psk": "key0" 00:17:00.932 } 00:17:00.932 }, 00:17:00.932 { 00:17:00.932 "method": "nvmf_subsystem_add_ns", 00:17:00.932 "params": { 00:17:00.932 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.932 "namespace": { 00:17:00.932 "nsid": 1, 00:17:00.932 "bdev_name": "malloc0", 00:17:00.932 "nguid": "D05E66BBF10D4F91ADC0D0CB03E565E0", 00:17:00.932 "uuid": "d05e66bb-f10d-4f91-adc0-d0cb03e565e0", 00:17:00.932 "no_auto_visible": false 00:17:00.932 } 00:17:00.932 } 00:17:00.932 }, 00:17:00.932 { 00:17:00.932 "method": "nvmf_subsystem_add_listener", 00:17:00.932 "params": { 00:17:00.932 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.932 "listen_address": { 00:17:00.932 "trtype": "TCP", 00:17:00.932 "adrfam": "IPv4", 00:17:00.932 "traddr": "10.0.0.3", 00:17:00.932 "trsvcid": "4420" 00:17:00.932 }, 00:17:00.932 "secure_channel": false, 00:17:00.932 "sock_impl": "ssl" 00:17:00.932 } 00:17:00.932 } 00:17:00.932 ] 00:17:00.932 } 00:17:00.932 ] 00:17:00.932 }' 00:17:00.932 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:01.190 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:17:01.190 "subsystems": [ 00:17:01.190 { 00:17:01.190 "subsystem": "keyring", 00:17:01.190 "config": [ 00:17:01.190 { 00:17:01.190 "method": "keyring_file_add_key", 00:17:01.190 "params": { 00:17:01.190 "name": "key0", 00:17:01.190 "path": "/tmp/tmp.psT9wy31fj" 00:17:01.190 } 00:17:01.190 } 00:17:01.190 ] 00:17:01.190 }, 00:17:01.190 { 00:17:01.190 "subsystem": "iobuf", 00:17:01.190 "config": [ 00:17:01.190 { 00:17:01.190 "method": 
"iobuf_set_options", 00:17:01.190 "params": { 00:17:01.190 "small_pool_count": 8192, 00:17:01.190 "large_pool_count": 1024, 00:17:01.190 "small_bufsize": 8192, 00:17:01.190 "large_bufsize": 135168, 00:17:01.190 "enable_numa": false 00:17:01.190 } 00:17:01.190 } 00:17:01.190 ] 00:17:01.190 }, 00:17:01.190 { 00:17:01.190 "subsystem": "sock", 00:17:01.190 "config": [ 00:17:01.190 { 00:17:01.190 "method": "sock_set_default_impl", 00:17:01.190 "params": { 00:17:01.190 "impl_name": "uring" 00:17:01.190 } 00:17:01.190 }, 00:17:01.190 { 00:17:01.190 "method": "sock_impl_set_options", 00:17:01.190 "params": { 00:17:01.190 "impl_name": "ssl", 00:17:01.190 "recv_buf_size": 4096, 00:17:01.190 "send_buf_size": 4096, 00:17:01.190 "enable_recv_pipe": true, 00:17:01.190 "enable_quickack": false, 00:17:01.190 "enable_placement_id": 0, 00:17:01.190 "enable_zerocopy_send_server": true, 00:17:01.190 "enable_zerocopy_send_client": false, 00:17:01.190 "zerocopy_threshold": 0, 00:17:01.190 "tls_version": 0, 00:17:01.190 "enable_ktls": false 00:17:01.190 } 00:17:01.190 }, 00:17:01.190 { 00:17:01.190 "method": "sock_impl_set_options", 00:17:01.190 "params": { 00:17:01.190 "impl_name": "posix", 00:17:01.190 "recv_buf_size": 2097152, 00:17:01.190 "send_buf_size": 2097152, 00:17:01.190 "enable_recv_pipe": true, 00:17:01.190 "enable_quickack": false, 00:17:01.190 "enable_placement_id": 0, 00:17:01.190 "enable_zerocopy_send_server": true, 00:17:01.190 "enable_zerocopy_send_client": false, 00:17:01.190 "zerocopy_threshold": 0, 00:17:01.190 "tls_version": 0, 00:17:01.190 "enable_ktls": false 00:17:01.190 } 00:17:01.190 }, 00:17:01.190 { 00:17:01.190 "method": "sock_impl_set_options", 00:17:01.190 "params": { 00:17:01.190 "impl_name": "uring", 00:17:01.190 "recv_buf_size": 2097152, 00:17:01.190 "send_buf_size": 2097152, 00:17:01.190 "enable_recv_pipe": true, 00:17:01.190 "enable_quickack": false, 00:17:01.190 "enable_placement_id": 0, 00:17:01.190 "enable_zerocopy_send_server": false, 00:17:01.190 "enable_zerocopy_send_client": false, 00:17:01.190 "zerocopy_threshold": 0, 00:17:01.190 "tls_version": 0, 00:17:01.190 "enable_ktls": false 00:17:01.190 } 00:17:01.190 } 00:17:01.190 ] 00:17:01.190 }, 00:17:01.190 { 00:17:01.190 "subsystem": "vmd", 00:17:01.190 "config": [] 00:17:01.190 }, 00:17:01.190 { 00:17:01.190 "subsystem": "accel", 00:17:01.190 "config": [ 00:17:01.190 { 00:17:01.190 "method": "accel_set_options", 00:17:01.190 "params": { 00:17:01.190 "small_cache_size": 128, 00:17:01.190 "large_cache_size": 16, 00:17:01.190 "task_count": 2048, 00:17:01.190 "sequence_count": 2048, 00:17:01.190 "buf_count": 2048 00:17:01.190 } 00:17:01.190 } 00:17:01.190 ] 00:17:01.190 }, 00:17:01.190 { 00:17:01.190 "subsystem": "bdev", 00:17:01.190 "config": [ 00:17:01.190 { 00:17:01.190 "method": "bdev_set_options", 00:17:01.190 "params": { 00:17:01.190 "bdev_io_pool_size": 65535, 00:17:01.190 "bdev_io_cache_size": 256, 00:17:01.190 "bdev_auto_examine": true, 00:17:01.190 "iobuf_small_cache_size": 128, 00:17:01.190 "iobuf_large_cache_size": 16 00:17:01.190 } 00:17:01.190 }, 00:17:01.190 { 00:17:01.190 "method": "bdev_raid_set_options", 00:17:01.190 "params": { 00:17:01.190 "process_window_size_kb": 1024, 00:17:01.190 "process_max_bandwidth_mb_sec": 0 00:17:01.190 } 00:17:01.190 }, 00:17:01.190 { 00:17:01.190 "method": "bdev_iscsi_set_options", 00:17:01.190 "params": { 00:17:01.190 "timeout_sec": 30 00:17:01.190 } 00:17:01.190 }, 00:17:01.190 { 00:17:01.190 "method": "bdev_nvme_set_options", 00:17:01.190 "params": { 00:17:01.190 
"action_on_timeout": "none", 00:17:01.190 "timeout_us": 0, 00:17:01.190 "timeout_admin_us": 0, 00:17:01.190 "keep_alive_timeout_ms": 10000, 00:17:01.190 "arbitration_burst": 0, 00:17:01.190 "low_priority_weight": 0, 00:17:01.191 "medium_priority_weight": 0, 00:17:01.191 "high_priority_weight": 0, 00:17:01.191 "nvme_adminq_poll_period_us": 10000, 00:17:01.191 "nvme_ioq_poll_period_us": 0, 00:17:01.191 "io_queue_requests": 512, 00:17:01.191 "delay_cmd_submit": true, 00:17:01.191 "transport_retry_count": 4, 00:17:01.191 "bdev_retry_count": 3, 00:17:01.191 "transport_ack_timeout": 0, 00:17:01.191 "ctrlr_loss_timeout_sec": 0, 00:17:01.191 "reconnect_delay_sec": 0, 00:17:01.191 "fast_io_fail_timeout_sec": 0, 00:17:01.191 "disable_auto_failback": false, 00:17:01.191 "generate_uuids": false, 00:17:01.191 "transport_tos": 0, 00:17:01.191 "nvme_error_stat": false, 00:17:01.191 "rdma_srq_size": 0, 00:17:01.191 "io_path_stat": false, 00:17:01.191 "allow_accel_sequence": false, 00:17:01.191 "rdma_max_cq_size": 0, 00:17:01.191 "rdma_cm_event_timeout_ms": 0, 00:17:01.191 "dhchap_digests": [ 00:17:01.191 "sha256", 00:17:01.191 "sha384", 00:17:01.191 "sha512" 00:17:01.191 ], 00:17:01.191 "dhchap_dhgroups": [ 00:17:01.191 "null", 00:17:01.191 "ffdhe2048", 00:17:01.191 "ffdhe3072", 00:17:01.191 "ffdhe4096", 00:17:01.191 "ffdhe6144", 00:17:01.191 "ffdhe8192" 00:17:01.191 ] 00:17:01.191 } 00:17:01.191 }, 00:17:01.191 { 00:17:01.191 "method": "bdev_nvme_attach_controller", 00:17:01.191 "params": { 00:17:01.191 "name": "nvme0", 00:17:01.191 "trtype": "TCP", 00:17:01.191 "adrfam": "IPv4", 00:17:01.191 "traddr": "10.0.0.3", 00:17:01.191 "trsvcid": "4420", 00:17:01.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:01.191 "prchk_reftag": false, 00:17:01.191 "prchk_guard": false, 00:17:01.191 "ctrlr_loss_timeout_sec": 0, 00:17:01.191 "reconnect_delay_sec": 0, 00:17:01.191 "fast_io_fail_timeout_sec": 0, 00:17:01.191 "psk": "key0", 00:17:01.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:01.191 "hdgst": false, 00:17:01.191 "ddgst": false, 00:17:01.191 "multipath": "multipath" 00:17:01.191 } 00:17:01.191 }, 00:17:01.191 { 00:17:01.191 "method": "bdev_nvme_set_hotplug", 00:17:01.191 "params": { 00:17:01.191 "period_us": 100000, 00:17:01.191 "enable": false 00:17:01.191 } 00:17:01.191 }, 00:17:01.191 { 00:17:01.191 "method": "bdev_enable_histogram", 00:17:01.191 "params": { 00:17:01.191 "name": "nvme0n1", 00:17:01.191 "enable": true 00:17:01.191 } 00:17:01.191 }, 00:17:01.191 { 00:17:01.191 "method": "bdev_wait_for_examine" 00:17:01.191 } 00:17:01.191 ] 00:17:01.191 }, 00:17:01.191 { 00:17:01.191 "subsystem": "nbd", 00:17:01.191 "config": [] 00:17:01.191 } 00:17:01.191 ] 00:17:01.191 }' 00:17:01.191 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72899 00:17:01.191 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72899 ']' 00:17:01.191 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72899 00:17:01.191 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:01.191 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.191 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72899 00:17:01.191 killing process with pid 72899 00:17:01.191 Received shutdown signal, test time was about 1.000000 seconds 00:17:01.191 00:17:01.191 Latency(us) 00:17:01.191 
[2024-12-09T04:06:43.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.191 [2024-12-09T04:06:43.141Z] =================================================================================================================== 00:17:01.191 [2024-12-09T04:06:43.141Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:01.191 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:01.191 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:01.191 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72899' 00:17:01.191 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72899 00:17:01.191 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72899 00:17:01.450 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72880 00:17:01.450 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72880 ']' 00:17:01.450 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72880 00:17:01.450 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:01.450 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.450 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72880 00:17:01.450 killing process with pid 72880 00:17:01.450 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:01.450 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:01.450 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72880' 00:17:01.450 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72880 00:17:01.450 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72880 00:17:02.017 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:17:02.017 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:02.017 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:17:02.017 "subsystems": [ 00:17:02.017 { 00:17:02.017 "subsystem": "keyring", 00:17:02.017 "config": [ 00:17:02.017 { 00:17:02.017 "method": "keyring_file_add_key", 00:17:02.017 "params": { 00:17:02.017 "name": "key0", 00:17:02.017 "path": "/tmp/tmp.psT9wy31fj" 00:17:02.017 } 00:17:02.017 } 00:17:02.017 ] 00:17:02.017 }, 00:17:02.017 { 00:17:02.017 "subsystem": "iobuf", 00:17:02.017 "config": [ 00:17:02.017 { 00:17:02.017 "method": "iobuf_set_options", 00:17:02.017 "params": { 00:17:02.017 "small_pool_count": 8192, 00:17:02.017 "large_pool_count": 1024, 00:17:02.017 "small_bufsize": 8192, 00:17:02.017 "large_bufsize": 135168, 00:17:02.017 "enable_numa": false 00:17:02.017 } 00:17:02.017 } 00:17:02.017 ] 00:17:02.017 }, 00:17:02.017 { 00:17:02.017 "subsystem": "sock", 00:17:02.017 "config": [ 00:17:02.017 { 00:17:02.017 "method": "sock_set_default_impl", 00:17:02.017 "params": { 00:17:02.017 "impl_name": "uring" 00:17:02.017 } 00:17:02.017 }, 00:17:02.017 { 00:17:02.017 "method": 
"sock_impl_set_options", 00:17:02.017 "params": { 00:17:02.017 "impl_name": "ssl", 00:17:02.017 "recv_buf_size": 4096, 00:17:02.017 "send_buf_size": 4096, 00:17:02.017 "enable_recv_pipe": true, 00:17:02.017 "enable_quickack": false, 00:17:02.017 "enable_placement_id": 0, 00:17:02.017 "enable_zerocopy_send_server": true, 00:17:02.017 "enable_zerocopy_send_client": false, 00:17:02.017 "zerocopy_threshold": 0, 00:17:02.017 "tls_version": 0, 00:17:02.017 "enable_ktls": false 00:17:02.017 } 00:17:02.017 }, 00:17:02.017 { 00:17:02.017 "method": "sock_impl_set_options", 00:17:02.017 "params": { 00:17:02.017 "impl_name": "posix", 00:17:02.017 "recv_buf_size": 2097152, 00:17:02.017 "send_buf_size": 2097152, 00:17:02.017 "enable_recv_pipe": true, 00:17:02.017 "enable_quickack": false, 00:17:02.017 "enable_placement_id": 0, 00:17:02.017 "enable_zerocopy_send_server": true, 00:17:02.017 "enable_zerocopy_send_client": false, 00:17:02.017 "zerocopy_threshold": 0, 00:17:02.017 "tls_version": 0, 00:17:02.017 "enable_ktls": false 00:17:02.017 } 00:17:02.017 }, 00:17:02.017 { 00:17:02.017 "method": "sock_impl_set_options", 00:17:02.017 "params": { 00:17:02.017 "impl_name": "uring", 00:17:02.017 "recv_buf_size": 2097152, 00:17:02.017 "send_buf_size": 2097152, 00:17:02.017 "enable_recv_pipe": true, 00:17:02.017 "enable_quickack": false, 00:17:02.017 "enable_placement_id": 0, 00:17:02.017 "enable_zerocopy_send_server": false, 00:17:02.017 "enable_zerocopy_send_client": false, 00:17:02.017 "zerocopy_threshold": 0, 00:17:02.017 "tls_version": 0, 00:17:02.017 "enable_ktls": false 00:17:02.017 } 00:17:02.017 } 00:17:02.017 ] 00:17:02.017 }, 00:17:02.017 { 00:17:02.017 "subsystem": "vmd", 00:17:02.017 "config": [] 00:17:02.017 }, 00:17:02.017 { 00:17:02.018 "subsystem": "accel", 00:17:02.018 "config": [ 00:17:02.018 { 00:17:02.018 "method": "accel_set_options", 00:17:02.018 "params": { 00:17:02.018 "small_cache_size": 128, 00:17:02.018 "large_cache_size": 16, 00:17:02.018 "task_count": 2048, 00:17:02.018 "sequence_count": 2048, 00:17:02.018 "buf_count": 2048 00:17:02.018 } 00:17:02.018 } 00:17:02.018 ] 00:17:02.018 }, 00:17:02.018 { 00:17:02.018 "subsystem": "bdev", 00:17:02.018 "config": [ 00:17:02.018 { 00:17:02.018 "method": "bdev_set_options", 00:17:02.018 "params": { 00:17:02.018 "bdev_io_pool_size": 65535, 00:17:02.018 "bdev_io_cache_size": 256, 00:17:02.018 "bdev_auto_examine": true, 00:17:02.018 "iobuf_small_cache_size": 128, 00:17:02.018 "iobuf_large_cache_size": 16 00:17:02.018 } 00:17:02.018 }, 00:17:02.018 { 00:17:02.018 "method": "bdev_raid_set_options", 00:17:02.018 "params": { 00:17:02.018 "process_window_size_kb": 1024, 00:17:02.018 "process_max_bandwidth_mb_sec": 0 00:17:02.018 } 00:17:02.018 }, 00:17:02.018 { 00:17:02.018 "method": "bdev_iscsi_set_options", 00:17:02.018 "params": { 00:17:02.018 "timeout_sec": 30 00:17:02.018 } 00:17:02.018 }, 00:17:02.018 { 00:17:02.018 "method": "bdev_nvme_set_options", 00:17:02.018 "params": { 00:17:02.018 "action_on_timeout": "none", 00:17:02.018 "timeout_us": 0, 00:17:02.018 "timeout_admin_us": 0, 00:17:02.018 "keep_alive_timeout_ms": 10000, 00:17:02.018 "arbitration_burst": 0, 00:17:02.018 "low_priority_weight": 0, 00:17:02.018 "medium_priority_weight": 0, 00:17:02.018 "high_priority_weight": 0, 00:17:02.018 "nvme_adminq_poll_period_us": 10000, 00:17:02.018 "nvme_ioq_poll_period_us": 0, 00:17:02.018 "io_queue_requests": 0, 00:17:02.018 "delay_cmd_submit": true, 00:17:02.018 "transport_retry_count": 4, 00:17:02.018 "bdev_retry_count": 3, 00:17:02.018 
"transport_ack_timeout": 0, 00:17:02.018 "ctrlr_loss_timeout_sec": 0, 00:17:02.018 "reconnect_delay_sec": 0, 00:17:02.018 "fast_io_fail_timeout_sec": 0, 00:17:02.018 "disable_auto_failback": false, 00:17:02.018 "generate_uuids": false, 00:17:02.018 "transport_tos": 0, 00:17:02.018 "nvme_error_stat": false, 00:17:02.018 "rdma_srq_size": 0, 00:17:02.018 "io_path_stat": false, 00:17:02.018 "allow_accel_sequence": false, 00:17:02.018 "rdma_max_cq_size": 0, 00:17:02.018 "rdma_cm_event_timeout_ms": 0, 00:17:02.018 "dhchap_digests": [ 00:17:02.018 "sha256", 00:17:02.018 "sha384", 00:17:02.018 "sha512" 00:17:02.018 ], 00:17:02.018 "dhchap_dhgroups": [ 00:17:02.018 "null", 00:17:02.018 "ffdhe2048", 00:17:02.018 "ffdhe3072", 00:17:02.018 "ffdhe4096", 00:17:02.018 "ffdhe6144", 00:17:02.018 "ffdhe8192" 00:17:02.018 ] 00:17:02.018 } 00:17:02.018 }, 00:17:02.018 { 00:17:02.018 "method": "bdev_nvme_set_hotplug", 00:17:02.018 "params": { 00:17:02.018 "period_us": 100000, 00:17:02.018 "enable": false 00:17:02.018 } 00:17:02.018 }, 00:17:02.018 { 00:17:02.018 "method": "bdev_malloc_create", 00:17:02.018 "params": { 00:17:02.018 "name": "malloc0", 00:17:02.018 "num_blocks": 8192, 00:17:02.018 "block_size": 4096, 00:17:02.018 "physical_block_size": 4096, 00:17:02.018 "uuid": "d05e66bb-f10d-4f91-adc0-d0cb03e565e0", 00:17:02.018 "optimal_io_boundary": 0, 00:17:02.018 "md_size": 0, 00:17:02.018 "dif_type": 0, 00:17:02.018 "dif_is_head_of_md": false, 00:17:02.018 "dif_pi_format": 0 00:17:02.018 } 00:17:02.018 }, 00:17:02.018 { 00:17:02.018 "method": "bdev_wait_for_examine" 00:17:02.018 } 00:17:02.018 ] 00:17:02.018 }, 00:17:02.018 { 00:17:02.018 "subsystem": "nbd", 00:17:02.018 "config": [] 00:17:02.018 }, 00:17:02.018 { 00:17:02.018 "subsystem": "scheduler", 00:17:02.018 "config": [ 00:17:02.018 { 00:17:02.018 "method": "framework_set_scheduler", 00:17:02.018 "params": { 00:17:02.018 "name": "static" 00:17:02.018 } 00:17:02.018 } 00:17:02.018 ] 00:17:02.018 }, 00:17:02.018 { 00:17:02.018 "subsystem": "nvmf", 00:17:02.018 "config": [ 00:17:02.018 { 00:17:02.018 "method": "nvmf_set_config", 00:17:02.018 "params": { 00:17:02.018 "discovery_filter": "match_any", 00:17:02.018 "admin_cmd_passthru": { 00:17:02.018 "identify_ctrlr": false 00:17:02.018 }, 00:17:02.018 "dhchap_digests": [ 00:17:02.018 "sha256", 00:17:02.018 "sha384", 00:17:02.018 "sha512" 00:17:02.018 ], 00:17:02.018 "dhchap_dhgroups": [ 00:17:02.018 "null", 00:17:02.018 "ffdhe2048", 00:17:02.018 "ffdhe3072", 00:17:02.018 "ffdhe4096", 00:17:02.018 "ffdhe6144", 00:17:02.018 "ffdhe8192" 00:17:02.018 ] 00:17:02.018 } 00:17:02.018 }, 00:17:02.018 { 00:17:02.018 "method": "nvmf_set_max_subsyste 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.018 ms", 00:17:02.018 "params": { 00:17:02.018 "max_subsystems": 1024 00:17:02.018 } 00:17:02.018 }, 00:17:02.018 { 00:17:02.018 "method": "nvmf_set_crdt", 00:17:02.018 "params": { 00:17:02.018 "crdt1": 0, 00:17:02.018 "crdt2": 0, 00:17:02.018 "crdt3": 0 00:17:02.018 } 00:17:02.018 }, 00:17:02.018 { 00:17:02.018 "method": "nvmf_create_transport", 00:17:02.018 "params": { 00:17:02.018 "trtype": "TCP", 00:17:02.018 "max_queue_depth": 128, 00:17:02.018 "max_io_qpairs_per_ctrlr": 127, 00:17:02.018 "in_capsule_data_size": 4096, 00:17:02.018 "max_io_size": 131072, 00:17:02.018 "io_unit_size": 131072, 00:17:02.018 "max_aq_depth": 128, 00:17:02.018 "num_shared_buffers": 511, 00:17:02.018 "buf_cache_size": 4294967295, 00:17:02.018 "dif_insert_or_strip": false, 00:17:02.018 
"zcopy": false, 00:17:02.018 "c2h_success": false, 00:17:02.018 "sock_priority": 0, 00:17:02.018 "abort_timeout_sec": 1, 00:17:02.018 "ack_timeout": 0, 00:17:02.018 "data_wr_pool_size": 0 00:17:02.018 } 00:17:02.018 }, 00:17:02.018 { 00:17:02.018 "method": "nvmf_create_subsystem", 00:17:02.018 "params": { 00:17:02.018 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.018 "allow_any_host": false, 00:17:02.018 "serial_number": "00000000000000000000", 00:17:02.018 "model_number": "SPDK bdev Controller", 00:17:02.018 "max_namespaces": 32, 00:17:02.018 "min_cntlid": 1, 00:17:02.018 "max_cntlid": 65519, 00:17:02.018 "ana_reporting": false 00:17:02.018 } 00:17:02.018 }, 00:17:02.018 { 00:17:02.018 "method": "nvmf_subsystem_add_host", 00:17:02.018 "params": { 00:17:02.018 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.018 "host": "nqn.2016-06.io.spdk:host1", 00:17:02.018 "psk": "key0" 00:17:02.018 } 00:17:02.018 }, 00:17:02.018 { 00:17:02.018 "method": "nvmf_subsystem_add_ns", 00:17:02.018 "params": { 00:17:02.018 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.018 "namespace": { 00:17:02.018 "nsid": 1, 00:17:02.018 "bdev_name": "malloc0", 00:17:02.018 "nguid": "D05E66BBF10D4F91ADC0D0CB03E565E0", 00:17:02.018 "uuid": "d05e66bb-f10d-4f91-adc0-d0cb03e565e0", 00:17:02.018 "no_auto_visible": false 00:17:02.018 } 00:17:02.018 } 00:17:02.018 }, 00:17:02.018 { 00:17:02.018 "method": "nvmf_subsystem_add_listener", 00:17:02.018 "params": { 00:17:02.018 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.018 "listen_address": { 00:17:02.018 "trtype": "TCP", 00:17:02.018 "adrfam": "IPv4", 00:17:02.018 "traddr": "10.0.0.3", 00:17:02.018 "trsvcid": "4420" 00:17:02.018 }, 00:17:02.018 "secure_channel": false, 00:17:02.018 "sock_impl": "ssl" 00:17:02.018 } 00:17:02.018 } 00:17:02.018 ] 00:17:02.018 } 00:17:02.018 ] 00:17:02.018 }' 00:17:02.018 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.018 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72965 00:17:02.018 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:02.018 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72965 00:17:02.018 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72965 ']' 00:17:02.018 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.018 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.018 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.018 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.018 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.018 [2024-12-09 04:06:43.727755] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:17:02.018 [2024-12-09 04:06:43.727853] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.018 [2024-12-09 04:06:43.870772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.018 [2024-12-09 04:06:43.942439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.019 [2024-12-09 04:06:43.942501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.019 [2024-12-09 04:06:43.942527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.019 [2024-12-09 04:06:43.942535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.019 [2024-12-09 04:06:43.942542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.019 [2024-12-09 04:06:43.943061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.276 [2024-12-09 04:06:44.132338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:02.534 [2024-12-09 04:06:44.232966] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.534 [2024-12-09 04:06:44.264920] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:02.534 [2024-12-09 04:06:44.265145] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:02.793 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.793 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:02.793 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:02.793 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:02.793 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.793 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.793 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72998 00:17:02.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:02.793 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72998 /var/tmp/bdevperf.sock 00:17:02.793 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72998 ']' 00:17:02.793 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:17:02.793 "subsystems": [ 00:17:02.793 { 00:17:02.793 "subsystem": "keyring", 00:17:02.793 "config": [ 00:17:02.793 { 00:17:02.793 "method": "keyring_file_add_key", 00:17:02.793 "params": { 00:17:02.793 "name": "key0", 00:17:02.793 "path": "/tmp/tmp.psT9wy31fj" 00:17:02.793 } 00:17:02.793 } 00:17:02.793 ] 00:17:02.793 }, 00:17:02.793 { 00:17:02.793 "subsystem": "iobuf", 00:17:02.793 "config": [ 00:17:02.793 { 00:17:02.793 "method": "iobuf_set_options", 00:17:02.793 "params": { 00:17:02.793 "small_pool_count": 8192, 00:17:02.793 "large_pool_count": 1024, 00:17:02.793 "small_bufsize": 8192, 00:17:02.793 "large_bufsize": 135168, 00:17:02.793 "enable_numa": false 00:17:02.793 } 00:17:02.793 } 00:17:02.793 ] 00:17:02.793 }, 00:17:02.793 { 00:17:02.793 "subsystem": "sock", 00:17:02.793 "config": [ 00:17:02.793 { 00:17:02.793 "method": "sock_set_default_impl", 00:17:02.793 "params": { 00:17:02.793 "impl_name": "uring" 00:17:02.793 } 00:17:02.793 }, 00:17:02.793 { 00:17:02.793 "method": "sock_impl_set_options", 00:17:02.793 "params": { 00:17:02.793 "impl_name": "ssl", 00:17:02.793 "recv_buf_size": 4096, 00:17:02.793 "send_buf_size": 4096, 00:17:02.793 "enable_recv_pipe": true, 00:17:02.793 "enable_quickack": false, 00:17:02.793 "enable_placement_id": 0, 00:17:02.793 "enable_zerocopy_send_server": true, 00:17:02.793 "enable_zerocopy_send_client": false, 00:17:02.793 "zerocopy_threshold": 0, 00:17:02.793 "tls_version": 0, 00:17:02.793 "enable_ktls": false 00:17:02.793 } 00:17:02.793 }, 00:17:02.793 { 00:17:02.793 "method": "sock_impl_set_options", 00:17:02.793 "params": { 00:17:02.793 "impl_name": "posix", 00:17:02.794 "recv_buf_size": 2097152, 00:17:02.794 "send_buf_size": 2097152, 00:17:02.794 "enable_recv_pipe": true, 00:17:02.794 "enable_quickack": false, 00:17:02.794 "enable_placement_id": 0, 00:17:02.794 "enable_zerocopy_send_server": true, 00:17:02.794 "enable_zerocopy_send_client": false, 00:17:02.794 "zerocopy_threshold": 0, 00:17:02.794 "tls_version": 0, 00:17:02.794 "enable_ktls": false 00:17:02.794 } 00:17:02.794 }, 00:17:02.794 { 00:17:02.794 "method": "sock_impl_set_options", 00:17:02.794 "params": { 00:17:02.794 "impl_name": "uring", 00:17:02.794 "recv_buf_size": 2097152, 00:17:02.794 "send_buf_size": 2097152, 00:17:02.794 "enable_recv_pipe": true, 00:17:02.794 "enable_quickack": false, 00:17:02.794 "enable_placement_id": 0, 00:17:02.794 "enable_zerocopy_send_server": false, 00:17:02.794 "enable_zerocopy_send_client": false, 00:17:02.794 "zerocopy_threshold": 0, 00:17:02.794 "tls_version": 0, 00:17:02.794 "enable_ktls": false 00:17:02.794 } 00:17:02.794 } 00:17:02.794 ] 00:17:02.794 }, 00:17:02.794 { 00:17:02.794 "subsystem": "vmd", 00:17:02.794 "config": [] 00:17:02.794 }, 00:17:02.794 { 00:17:02.794 "subsystem": "accel", 00:17:02.794 "config": [ 00:17:02.794 { 00:17:02.794 "method": "accel_set_options", 00:17:02.794 "params": { 00:17:02.794 "small_cache_size": 128, 00:17:02.794 "large_cache_size": 16, 00:17:02.794 "task_count": 2048, 00:17:02.794 "sequence_count": 2048, 00:17:02.794 "buf_count": 2048 00:17:02.794 } 00:17:02.794 } 00:17:02.794 ] 00:17:02.794 }, 00:17:02.794 { 00:17:02.794 "subsystem": "bdev", 00:17:02.794 "config": [ 00:17:02.794 { 
00:17:02.794 "method": "bdev_set_options", 00:17:02.794 "params": { 00:17:02.794 "bdev_io_pool_size": 65535, 00:17:02.794 "bdev_io_cache_size": 256, 00:17:02.794 "bdev_auto_examine": true, 00:17:02.794 "iobuf_small_cache_size": 128, 00:17:02.794 "iobuf_large_cache_size": 16 00:17:02.794 } 00:17:02.794 }, 00:17:02.794 { 00:17:02.794 "method": "bdev_raid_set_options", 00:17:02.794 "params": { 00:17:02.794 "process_window_size_kb": 1024, 00:17:02.794 "process_max_bandwidth_mb_sec": 0 00:17:02.794 } 00:17:02.794 }, 00:17:02.794 { 00:17:02.794 "method": "bdev_iscsi_set_options", 00:17:02.794 "params": { 00:17:02.794 "timeout_sec": 30 00:17:02.794 } 00:17:02.794 }, 00:17:02.794 { 00:17:02.794 "method": "bdev_nvme_set_options", 00:17:02.794 "params": { 00:17:02.794 "action_on_timeout": "none", 00:17:02.794 "timeout_us": 0, 00:17:02.794 "timeout_admin_us": 0, 00:17:02.794 "keep_alive_timeout_ms": 10000, 00:17:02.794 "arbitration_burst": 0, 00:17:02.794 "low_priority_weight": 0, 00:17:02.794 "medium_priority_weight": 0, 00:17:02.794 "high_priority_weight": 0, 00:17:02.794 "nvme_adminq_poll_period_us": 10000, 00:17:02.794 "nvme_ioq_poll_period_us": 0, 00:17:02.794 "io_queue_requests": 512, 00:17:02.794 "delay_cmd_submit": true, 00:17:02.794 "transport_retry_count": 4, 00:17:02.794 "bdev_retry_count": 3, 00:17:02.794 "transport_ack_timeout": 0, 00:17:02.794 "ctrlr_loss_timeout_sec": 0, 00:17:02.794 "reconnect_delay_sec": 0, 00:17:02.794 "fast_io_fail_timeout_sec": 0, 00:17:02.794 "disable_auto_failback": false, 00:17:02.794 "generate_uuids": false, 00:17:02.794 "transport_tos": 0, 00:17:02.794 "nvme_error_stat": false, 00:17:02.794 "rdma_srq_size": 0, 00:17:02.794 "io_path_stat": false, 00:17:02.794 "allow_accel_sequence": false, 00:17:02.794 "rdma_max_cq_size": 0, 00:17:02.794 "rdma_cm_event_timeout_ms": 0, 00:17:02.794 "dhchap_digests": [ 00:17:02.794 "sha256", 00:17:02.794 "sha384", 00:17:02.794 "sha512" 00:17:02.794 ], 00:17:02.794 "dhchap_dhgroups": [ 00:17:02.794 "null", 00:17:02.794 "ffdhe2048", 00:17:02.794 "ffdhe3072", 00:17:02.794 "ffdhe4096", 00:17:02.794 "ffdhe6144", 00:17:02.794 "ffdhe8192" 00:17:02.794 ] 00:17:02.794 } 00:17:02.794 }, 00:17:02.794 { 00:17:02.794 "method": "bdev_nvme_attach_controller", 00:17:02.794 "params": { 00:17:02.794 "name": "nvme0", 00:17:02.794 "trtype": "TCP", 00:17:02.794 "adrfam": "IPv4", 00:17:02.794 "traddr": "10.0.0.3", 00:17:02.794 "trsvcid": "4420", 00:17:02.794 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.794 "prchk_reftag": false, 00:17:02.794 "prchk_guard": false, 00:17:02.794 "ctrlr_loss_timeout_sec": 0, 00:17:02.794 "reconnect_delay_sec": 0, 00:17:02.794 "fast_io_fail_timeout_sec": 0, 00:17:02.794 "psk": "key0", 00:17:02.794 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:02.794 "hdgst": false, 00:17:02.794 "ddgst": false, 00:17:02.794 "multipath": "multipath" 00:17:02.794 } 00:17:02.794 }, 00:17:02.794 { 00:17:02.794 "method": "bdev_nvme_set_hotplug", 00:17:02.794 "params": { 00:17:02.794 "period_us": 100000, 00:17:02.794 "enable": false 00:17:02.794 } 00:17:02.794 }, 00:17:02.794 { 00:17:02.794 "method": "bdev_enable_histogram", 00:17:02.794 "params": { 00:17:02.794 "name": "nvme0n1", 00:17:02.794 "enable": true 00:17:02.794 } 00:17:02.794 }, 00:17:02.794 { 00:17:02.794 "method": "bdev_wait_for_examine" 00:17:02.794 } 00:17:02.794 ] 00:17:02.794 }, 00:17:02.794 { 00:17:02.794 "subsystem": "nbd", 00:17:02.794 "config": [] 00:17:02.794 } 00:17:02.794 ] 00:17:02.794 }' 00:17:02.794 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 
-- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:02.794 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:02.794 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.794 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:02.794 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.794 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:03.053 [2024-12-09 04:06:44.797340] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:17:03.053 [2024-12-09 04:06:44.798405] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72998 ] 00:17:03.053 [2024-12-09 04:06:44.948406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.310 [2024-12-09 04:06:45.019258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.310 [2024-12-09 04:06:45.175857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:03.310 [2024-12-09 04:06:45.242466] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:03.877 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.877 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:03.877 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:03.877 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:17:04.134 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.134 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:04.392 Running I/O for 1 seconds... 
00:17:05.328 4608.00 IOPS, 18.00 MiB/s 00:17:05.328 Latency(us) 00:17:05.328 [2024-12-09T04:06:47.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.328 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:05.328 Verification LBA range: start 0x0 length 0x2000 00:17:05.328 nvme0n1 : 1.01 4668.14 18.23 0.00 0.00 27154.50 7298.33 18707.55 00:17:05.328 [2024-12-09T04:06:47.278Z] =================================================================================================================== 00:17:05.328 [2024-12-09T04:06:47.278Z] Total : 4668.14 18.23 0.00 0.00 27154.50 7298.33 18707.55 00:17:05.328 { 00:17:05.328 "results": [ 00:17:05.328 { 00:17:05.328 "job": "nvme0n1", 00:17:05.328 "core_mask": "0x2", 00:17:05.328 "workload": "verify", 00:17:05.328 "status": "finished", 00:17:05.328 "verify_range": { 00:17:05.328 "start": 0, 00:17:05.328 "length": 8192 00:17:05.328 }, 00:17:05.328 "queue_depth": 128, 00:17:05.328 "io_size": 4096, 00:17:05.328 "runtime": 1.014537, 00:17:05.328 "iops": 4668.139259583436, 00:17:05.328 "mibps": 18.234918982747796, 00:17:05.328 "io_failed": 0, 00:17:05.328 "io_timeout": 0, 00:17:05.328 "avg_latency_us": 27154.50181818182, 00:17:05.328 "min_latency_us": 7298.327272727272, 00:17:05.328 "max_latency_us": 18707.54909090909 00:17:05.328 } 00:17:05.328 ], 00:17:05.328 "core_count": 1 00:17:05.328 } 00:17:05.328 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:17:05.328 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:17:05.328 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:05.328 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:17:05.328 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:17:05.328 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:05.328 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:05.328 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:05.328 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:05.329 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:05.329 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:05.329 nvmf_trace.0 00:17:05.329 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:17:05.329 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72998 00:17:05.329 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72998 ']' 00:17:05.329 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72998 00:17:05.329 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:05.588 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.588 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72998 00:17:05.588 killing process 
with pid 72998 00:17:05.588 Received shutdown signal, test time was about 1.000000 seconds 00:17:05.588 00:17:05.588 Latency(us) 00:17:05.588 [2024-12-09T04:06:47.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.588 [2024-12-09T04:06:47.538Z] =================================================================================================================== 00:17:05.588 [2024-12-09T04:06:47.538Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:05.588 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:05.588 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:05.588 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72998' 00:17:05.588 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72998 00:17:05.588 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72998 00:17:05.846 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:05.846 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:05.846 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:17:05.846 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:05.846 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:17:05.846 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:05.846 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:05.846 rmmod nvme_tcp 00:17:05.846 rmmod nvme_fabrics 00:17:05.846 rmmod nvme_keyring 00:17:05.846 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:05.846 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:17:05.846 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:17:05.846 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72965 ']' 00:17:05.846 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72965 00:17:05.846 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72965 ']' 00:17:05.846 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72965 00:17:05.846 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:05.846 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.847 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72965 00:17:05.847 killing process with pid 72965 00:17:05.847 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.847 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.847 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72965' 00:17:05.847 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72965 00:17:05.847 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72965 
00:17:06.120 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:06.120 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:06.120 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:06.120 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:17:06.120 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:17:06.120 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:06.120 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:17:06.120 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:06.120 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:06.120 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:06.120 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:06.120 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Hif2z2dKA3 /tmp/tmp.G0qa89hRkC /tmp/tmp.psT9wy31fj 00:17:06.408 ************************************ 00:17:06.408 END TEST nvmf_tls 00:17:06.408 ************************************ 00:17:06.408 00:17:06.408 real 1m30.052s 00:17:06.408 user 2m27.661s 00:17:06.408 sys 0m28.189s 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:06.408 ************************************ 00:17:06.408 START TEST nvmf_fips 00:17:06.408 ************************************ 00:17:06.408 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:06.669 * Looking for test storage... 00:17:06.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:06.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.669 --rc genhtml_branch_coverage=1 00:17:06.669 --rc genhtml_function_coverage=1 00:17:06.669 --rc genhtml_legend=1 00:17:06.669 --rc geninfo_all_blocks=1 00:17:06.669 --rc geninfo_unexecuted_blocks=1 00:17:06.669 00:17:06.669 ' 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:06.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.669 --rc genhtml_branch_coverage=1 00:17:06.669 --rc genhtml_function_coverage=1 00:17:06.669 --rc genhtml_legend=1 00:17:06.669 --rc geninfo_all_blocks=1 00:17:06.669 --rc geninfo_unexecuted_blocks=1 00:17:06.669 00:17:06.669 ' 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:06.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.669 --rc genhtml_branch_coverage=1 00:17:06.669 --rc genhtml_function_coverage=1 00:17:06.669 --rc genhtml_legend=1 00:17:06.669 --rc geninfo_all_blocks=1 00:17:06.669 --rc geninfo_unexecuted_blocks=1 00:17:06.669 00:17:06.669 ' 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:06.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.669 --rc genhtml_branch_coverage=1 00:17:06.669 --rc genhtml_function_coverage=1 00:17:06.669 --rc genhtml_legend=1 00:17:06.669 --rc geninfo_all_blocks=1 00:17:06.669 --rc geninfo_unexecuted_blocks=1 00:17:06.669 00:17:06.669 ' 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
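The lt 1.15 2 check traced above (and the ge 3.1.1 3.0.0 OpenSSL check that follows) both run through the same component-wise comparison in scripts/common.sh: split each version string on dots and dashes, then compare field by field until one side wins. A rough, simplified sketch of that logic (illustrative only, not the exact cmp_versions implementation):

version_lt() {
    local -a ver1 ver2
    IFS='.-' read -ra ver1 <<< "$1"
    IFS='.-' read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1      # first differing field decides
        (( a < b )) && return 0
    done
    return 1                         # equal versions are not "less than"
}

version_lt 1.15 2      && echo "lcov is older than 2.x"
version_lt 3.1.1 3.0.0 || echo "OpenSSL meets the 3.0.0 minimum"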
00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:06.669 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:06.669 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.670 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:17:06.928 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.928 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:17:06.928 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.928 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:17:06.928 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:17:06.928 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:17:06.928 Error setting digest 00:17:06.928 4092F8B24A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:06.928 4092F8B24A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:06.928 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:17:06.928 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:06.928 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:06.928 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:06.929 
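The provider listing and the deliberate openssl md5 failure traced above ("Error setting digest" is the expected outcome) form the FIPS self-check: point OPENSSL_CONF at the generated spdk_fips.conf, confirm both the base and FIPS providers are loaded, then prove a non-approved digest is rejected. A hedged sketch of that check, assuming an OpenSSL 3.x installation with the FIPS provider available:

# List the providers loaded under the current OpenSSL configuration.
openssl list -providers | grep -i name

# A FIPS-enforcing configuration must refuse MD5; success here would mean
# the FIPS provider is not actually in effect.
if echo -n "self-test" | openssl md5 >/dev/null 2>&1; then
    echo "MD5 succeeded: FIPS restrictions are NOT enforced" >&2
else
    echo "MD5 rejected: only FIPS-approved algorithms are available"
fi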
04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:06.929 Cannot find device "nvmf_init_br" 00:17:06.929 04:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:06.929 Cannot find device "nvmf_init_br2" 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:06.929 Cannot find device "nvmf_tgt_br" 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:06.929 Cannot find device "nvmf_tgt_br2" 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:06.929 Cannot find device "nvmf_init_br" 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:06.929 Cannot find device "nvmf_init_br2" 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:06.929 Cannot find device "nvmf_tgt_br" 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:06.929 Cannot find device "nvmf_tgt_br2" 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:06.929 Cannot find device "nvmf_br" 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:06.929 Cannot find device "nvmf_init_if" 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:06.929 Cannot find device "nvmf_init_if2" 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:06.929 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:06.929 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:06.929 04:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:06.929 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:07.187 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:07.187 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:07.187 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:07.187 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:07.187 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:07.187 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:07.187 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:07.187 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:07.187 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:07.187 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:07.187 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:07.187 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:07.187 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:07.187 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:07.187 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:07.187 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:07.187 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:07.187 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:07.187 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:07.187 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:07.187 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:07.187 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:07.187 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:07.187 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:07.187 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:07.187 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:07.187 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:07.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:07.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:17:07.187 00:17:07.187 --- 10.0.0.3 ping statistics --- 00:17:07.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.187 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:17:07.187 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:07.187 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:07.187 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:17:07.187 00:17:07.187 --- 10.0.0.4 ping statistics --- 00:17:07.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.187 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:07.187 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:07.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:07.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:07.187 00:17:07.187 --- 10.0.0.1 ping statistics --- 00:17:07.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.188 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:07.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:07.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:17:07.188 00:17:07.188 --- 10.0.0.2 ping statistics --- 00:17:07.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.188 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=73314 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 73314 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73314 ']' 00:17:07.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.188 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:07.446 [2024-12-09 04:06:49.198259] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:17:07.446 [2024-12-09 04:06:49.198593] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.446 [2024-12-09 04:06:49.349418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.704 [2024-12-09 04:06:49.409767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.704 [2024-12-09 04:06:49.409837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.704 [2024-12-09 04:06:49.409863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.704 [2024-12-09 04:06:49.409871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.704 [2024-12-09 04:06:49.409878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.704 [2024-12-09 04:06:49.410311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.704 [2024-12-09 04:06:49.487314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:08.269 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.269 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:08.269 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:08.269 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:08.269 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:08.269 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.269 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:08.269 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:08.269 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:17:08.269 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.nmJ 00:17:08.269 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:08.269 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.nmJ 00:17:08.269 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.nmJ 00:17:08.269 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.nmJ 00:17:08.269 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.527 [2024-12-09 04:06:50.470152] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.786 [2024-12-09 04:06:50.486068] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:08.786 [2024-12-09 04:06:50.486381] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:08.786 malloc0 00:17:08.786 04:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:08.786 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73357 00:17:08.786 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:08.786 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73357 /var/tmp/bdevperf.sock 00:17:08.786 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73357 ']' 00:17:08.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:08.786 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.786 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.786 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:08.786 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.786 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:08.786 [2024-12-09 04:06:50.626700] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:17:08.786 [2024-12-09 04:06:50.626796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73357 ] 00:17:09.043 [2024-12-09 04:06:50.772823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.043 [2024-12-09 04:06:50.851070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.043 [2024-12-09 04:06:50.930543] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:09.977 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.977 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:09.977 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.nmJ 00:17:09.977 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:10.234 [2024-12-09 04:06:52.059529] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:10.234 TLSTESTn1 00:17:10.234 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:10.492 Running I/O for 10 seconds... 
00:17:12.363 4425.00 IOPS, 17.29 MiB/s [2024-12-09T04:06:55.688Z] 4456.00 IOPS, 17.41 MiB/s [2024-12-09T04:06:56.622Z] 4530.33 IOPS, 17.70 MiB/s [2024-12-09T04:06:57.568Z] 4559.75 IOPS, 17.81 MiB/s [2024-12-09T04:06:58.549Z] 4588.00 IOPS, 17.92 MiB/s [2024-12-09T04:06:59.496Z] 4607.33 IOPS, 18.00 MiB/s [2024-12-09T04:07:00.431Z] 4606.71 IOPS, 17.99 MiB/s [2024-12-09T04:07:01.364Z] 4612.00 IOPS, 18.02 MiB/s [2024-12-09T04:07:02.300Z] 4618.67 IOPS, 18.04 MiB/s [2024-12-09T04:07:02.300Z] 4627.50 IOPS, 18.08 MiB/s 00:17:20.350 Latency(us) 00:17:20.350 [2024-12-09T04:07:02.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.350 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:20.350 Verification LBA range: start 0x0 length 0x2000 00:17:20.350 TLSTESTn1 : 10.02 4631.01 18.09 0.00 0.00 27581.90 6285.50 32648.84 00:17:20.350 [2024-12-09T04:07:02.300Z] =================================================================================================================== 00:17:20.350 [2024-12-09T04:07:02.300Z] Total : 4631.01 18.09 0.00 0.00 27581.90 6285.50 32648.84 00:17:20.350 { 00:17:20.350 "results": [ 00:17:20.350 { 00:17:20.350 "job": "TLSTESTn1", 00:17:20.350 "core_mask": "0x4", 00:17:20.350 "workload": "verify", 00:17:20.350 "status": "finished", 00:17:20.350 "verify_range": { 00:17:20.350 "start": 0, 00:17:20.350 "length": 8192 00:17:20.350 }, 00:17:20.350 "queue_depth": 128, 00:17:20.350 "io_size": 4096, 00:17:20.350 "runtime": 10.019634, 00:17:20.350 "iops": 4631.007479913937, 00:17:20.350 "mibps": 18.089872968413815, 00:17:20.350 "io_failed": 0, 00:17:20.350 "io_timeout": 0, 00:17:20.350 "avg_latency_us": 27581.90198377386, 00:17:20.350 "min_latency_us": 6285.498181818181, 00:17:20.350 "max_latency_us": 32648.843636363636 00:17:20.350 } 00:17:20.350 ], 00:17:20.350 "core_count": 1 00:17:20.350 } 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:20.610 nvmf_trace.0 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73357 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73357 ']' 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
73357 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73357 00:17:20.610 killing process with pid 73357 00:17:20.610 Received shutdown signal, test time was about 10.000000 seconds 00:17:20.610 00:17:20.610 Latency(us) 00:17:20.610 [2024-12-09T04:07:02.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.610 [2024-12-09T04:07:02.560Z] =================================================================================================================== 00:17:20.610 [2024-12-09T04:07:02.560Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73357' 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73357 00:17:20.610 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73357 00:17:20.868 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:20.868 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:20.868 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:17:20.868 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:20.868 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:17:20.868 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:20.868 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:20.868 rmmod nvme_tcp 00:17:20.868 rmmod nvme_fabrics 00:17:20.868 rmmod nvme_keyring 00:17:20.868 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:21.127 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:17:21.127 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:17:21.127 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 73314 ']' 00:17:21.127 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 73314 00:17:21.127 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73314 ']' 00:17:21.127 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 73314 00:17:21.127 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:21.127 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.127 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73314 00:17:21.127 killing process with pid 73314 00:17:21.127 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:21.127 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:21.127 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73314' 00:17:21.127 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73314 00:17:21.127 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73314 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:21.386 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:21.645 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:21.645 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:21.645 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.645 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.645 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.645 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:17:21.645 04:07:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.nmJ 00:17:21.645 ************************************ 00:17:21.645 END TEST nvmf_fips 00:17:21.645 ************************************ 00:17:21.645 00:17:21.645 real 0m15.111s 00:17:21.645 user 0m21.229s 00:17:21.645 sys 0m5.648s 00:17:21.645 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.645 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:21.645 04:07:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:21.645 04:07:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:21.645 04:07:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.645 04:07:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:21.645 ************************************ 00:17:21.645 START TEST nvmf_control_msg_list 00:17:21.645 ************************************ 00:17:21.645 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:21.645 * Looking for test storage... 00:17:21.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:21.645 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:21.645 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:17:21.645 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:21.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.906 --rc genhtml_branch_coverage=1 00:17:21.906 --rc genhtml_function_coverage=1 00:17:21.906 --rc genhtml_legend=1 00:17:21.906 --rc geninfo_all_blocks=1 00:17:21.906 --rc geninfo_unexecuted_blocks=1 00:17:21.906 00:17:21.906 ' 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:21.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.906 --rc genhtml_branch_coverage=1 00:17:21.906 --rc genhtml_function_coverage=1 00:17:21.906 --rc genhtml_legend=1 00:17:21.906 --rc geninfo_all_blocks=1 00:17:21.906 --rc geninfo_unexecuted_blocks=1 00:17:21.906 00:17:21.906 ' 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:21.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.906 --rc genhtml_branch_coverage=1 00:17:21.906 --rc genhtml_function_coverage=1 00:17:21.906 --rc genhtml_legend=1 00:17:21.906 --rc geninfo_all_blocks=1 00:17:21.906 --rc geninfo_unexecuted_blocks=1 00:17:21.906 00:17:21.906 ' 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:21.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.906 --rc genhtml_branch_coverage=1 00:17:21.906 --rc genhtml_function_coverage=1 00:17:21.906 --rc genhtml_legend=1 00:17:21.906 --rc geninfo_all_blocks=1 00:17:21.906 --rc 
geninfo_unexecuted_blocks=1 00:17:21.906 00:17:21.906 ' 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.906 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:21.907 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:21.907 Cannot find device "nvmf_init_br" 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:21.907 Cannot find device "nvmf_init_br2" 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:21.907 Cannot find device "nvmf_tgt_br" 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:21.907 Cannot find device "nvmf_tgt_br2" 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:21.907 Cannot find device "nvmf_init_br" 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:21.907 Cannot find device "nvmf_init_br2" 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:21.907 Cannot find device "nvmf_tgt_br" 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:21.907 Cannot find device "nvmf_tgt_br2" 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:21.907 Cannot find device "nvmf_br" 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:21.907 Cannot find 
device "nvmf_init_if" 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:21.907 Cannot find device "nvmf_init_if2" 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:21.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:21.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:21.907 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:22.167 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:22.167 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:22.167 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:22.167 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:22.167 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:22.167 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:22.167 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:22.167 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:22.167 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:22.167 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:22.167 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:22.167 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:22.167 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:22.167 04:07:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:22.167 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:22.167 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:22.167 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:22.167 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:22.167 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:22.167 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:22.167 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:17:22.167 00:17:22.167 --- 10.0.0.3 ping statistics --- 00:17:22.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.167 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:22.167 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:22.167 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:17:22.167 00:17:22.167 --- 10.0.0.4 ping statistics --- 00:17:22.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.167 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:22.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:22.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:22.167 00:17:22.167 --- 10.0.0.1 ping statistics --- 00:17:22.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.167 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:22.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:22.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:17:22.167 00:17:22.167 --- 10.0.0.2 ping statistics --- 00:17:22.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.167 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:22.167 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:22.426 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73739 00:17:22.427 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:22.427 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73739 00:17:22.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.427 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73739 ']' 00:17:22.427 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.427 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.427 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
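[Editor's note] The nvmf_veth_init trace above (netns creation, veth pairs, nvmf_br bridge, 10.0.0.x addressing, iptables rules, ping checks) is dense in the captured output. The following is a minimal sketch, not the harness code itself, of the topology those steps construct: one initiator-side veth pair kept on the host and one target-side pair whose far end lives in the nvmf_tgt_ns_spdk namespace, joined by the nvmf_br bridge. The traced harness creates a second pair on each side and a second subnet address; those are omitted here. Names, addresses, and the tagged iptables rule follow the trace; run as root.

```bash
#!/usr/bin/env bash
# Sketch of the veth/bridge layout built by nvmf_veth_init in the trace above.
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# Initiator-side pair stays on the host; the target-side interface moves into the netns.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so 10.0.0.1 (initiator) can reach 10.0.0.3 (target netns).
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Open the NVMe/TCP port and allow bridge forwarding; the SPDK_NVMF comment tag is what
# the teardown's "iptables-save | grep -v SPDK_NVMF | iptables-restore" strips later.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

ping -c 1 10.0.0.3   # same reachability check the trace performs before starting nvmf_tgt
```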
00:17:22.427 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.427 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:22.427 [2024-12-09 04:07:04.183398] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:17:22.427 [2024-12-09 04:07:04.183720] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.427 [2024-12-09 04:07:04.338805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.685 [2024-12-09 04:07:04.407341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.685 [2024-12-09 04:07:04.407411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.685 [2024-12-09 04:07:04.407425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.685 [2024-12-09 04:07:04.407436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.685 [2024-12-09 04:07:04.407445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.685 [2024-12-09 04:07:04.407955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.685 [2024-12-09 04:07:04.486972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:22.685 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.685 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:17:22.685 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:22.685 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:22.685 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:22.685 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.685 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:22.685 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:22.686 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:17:22.686 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.686 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:22.686 [2024-12-09 04:07:04.625825] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.686 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.686 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:17:22.686 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.686 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:22.944 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.944 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:22.944 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.945 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:22.945 Malloc0 00:17:22.945 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.945 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:22.945 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.945 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:22.945 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.945 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:22.945 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.945 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:22.945 [2024-12-09 04:07:04.669491] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:22.945 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.945 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73764 00:17:22.945 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:22.945 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73765 00:17:22.945 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:22.945 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73766 00:17:22.945 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:22.945 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73764 00:17:22.945 [2024-12-09 04:07:04.868077] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:22.945 [2024-12-09 04:07:04.868706] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:22.945 [2024-12-09 04:07:04.878203] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:24.322 Initializing NVMe Controllers 00:17:24.322 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:24.322 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:17:24.322 Initialization complete. Launching workers. 00:17:24.322 ======================================================== 00:17:24.322 Latency(us) 00:17:24.322 Device Information : IOPS MiB/s Average min max 00:17:24.322 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3514.00 13.73 284.25 177.75 885.06 00:17:24.322 ======================================================== 00:17:24.322 Total : 3514.00 13.73 284.25 177.75 885.06 00:17:24.322 00:17:24.322 Initializing NVMe Controllers 00:17:24.322 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:24.322 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:17:24.322 Initialization complete. Launching workers. 00:17:24.322 ======================================================== 00:17:24.322 Latency(us) 00:17:24.322 Device Information : IOPS MiB/s Average min max 00:17:24.322 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3512.00 13.72 284.35 185.90 1677.53 00:17:24.322 ======================================================== 00:17:24.322 Total : 3512.00 13.72 284.35 185.90 1677.53 00:17:24.322 00:17:24.322 Initializing NVMe Controllers 00:17:24.322 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:24.322 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:17:24.322 Initialization complete. Launching workers. 
00:17:24.322 ======================================================== 00:17:24.322 Latency(us) 00:17:24.322 Device Information : IOPS MiB/s Average min max 00:17:24.322 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3549.98 13.87 281.28 112.44 885.48 00:17:24.322 ======================================================== 00:17:24.322 Total : 3549.98 13.87 281.28 112.44 885.48 00:17:24.322 00:17:24.322 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73765 00:17:24.322 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73766 00:17:24.323 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:24.323 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:17:24.323 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:24.323 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:17:24.323 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:24.323 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:17:24.323 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:24.323 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:24.323 rmmod nvme_tcp 00:17:24.323 rmmod nvme_fabrics 00:17:24.323 rmmod nvme_keyring 00:17:24.323 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:24.323 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:17:24.323 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:17:24.323 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73739 ']' 00:17:24.323 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73739 00:17:24.323 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73739 ']' 00:17:24.323 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73739 00:17:24.323 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:17:24.323 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:24.323 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73739 00:17:24.323 killing process with pid 73739 00:17:24.323 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:24.323 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:24.323 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73739' 00:17:24.323 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73739 00:17:24.323 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73739 00:17:24.582 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:24.582 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:24.582 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:24.582 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:17:24.582 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:24.582 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:17:24.582 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:17:24.582 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:24.582 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:24.582 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:24.582 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:24.582 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:24.582 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:24.582 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:24.582 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:24.582 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:24.583 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:24.583 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:24.583 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:24.583 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:24.842 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:24.842 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:24.842 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:24.842 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.842 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:24.842 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.842 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:17:24.842 00:17:24.842 real 0m3.166s 00:17:24.842 user 0m4.946s 00:17:24.842 
sys 0m1.422s 00:17:24.842 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:24.842 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:24.842 ************************************ 00:17:24.842 END TEST nvmf_control_msg_list 00:17:24.842 ************************************ 00:17:24.842 04:07:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:24.842 04:07:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:24.842 04:07:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:24.842 04:07:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:24.842 ************************************ 00:17:24.843 START TEST nvmf_wait_for_buf 00:17:24.843 ************************************ 00:17:24.843 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:24.843 * Looking for test storage... 00:17:24.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:24.843 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:24.843 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:17:24.843 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:25.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.103 --rc genhtml_branch_coverage=1 00:17:25.103 --rc genhtml_function_coverage=1 00:17:25.103 --rc genhtml_legend=1 00:17:25.103 --rc geninfo_all_blocks=1 00:17:25.103 --rc geninfo_unexecuted_blocks=1 00:17:25.103 00:17:25.103 ' 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:25.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.103 --rc genhtml_branch_coverage=1 00:17:25.103 --rc genhtml_function_coverage=1 00:17:25.103 --rc genhtml_legend=1 00:17:25.103 --rc geninfo_all_blocks=1 00:17:25.103 --rc geninfo_unexecuted_blocks=1 00:17:25.103 00:17:25.103 ' 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:25.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.103 --rc genhtml_branch_coverage=1 00:17:25.103 --rc genhtml_function_coverage=1 00:17:25.103 --rc genhtml_legend=1 00:17:25.103 --rc geninfo_all_blocks=1 00:17:25.103 --rc geninfo_unexecuted_blocks=1 00:17:25.103 00:17:25.103 ' 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:25.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.103 --rc genhtml_branch_coverage=1 00:17:25.103 --rc genhtml_function_coverage=1 00:17:25.103 --rc genhtml_legend=1 00:17:25.103 --rc geninfo_all_blocks=1 00:17:25.103 --rc geninfo_unexecuted_blocks=1 00:17:25.103 00:17:25.103 ' 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:25.103 04:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:25.103 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:25.103 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
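The "[: : integer expression expected" message printed just above comes from the test's own common.sh (line 33) and is harmless: the guarded variable is empty, so bash's [ builtin receives an empty string where the numeric -eq operator expects an integer, prints the complaint with exit status 2, and the script falls through to the next guard. A minimal reproduction, assuming nothing beyond stock bash (the variable name here is made up for illustration):

    some_flag=""
    # [ sees an empty first operand for -eq and reports
    # "[: : integer expression expected"; the && branch never runs.
    [ "$some_flag" -eq 1 ] && echo "flag enabled"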
00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:25.104 Cannot find device "nvmf_init_br" 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:25.104 Cannot find device "nvmf_init_br2" 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:25.104 Cannot find device "nvmf_tgt_br" 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:25.104 Cannot find device "nvmf_tgt_br2" 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:25.104 Cannot find device "nvmf_init_br" 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:25.104 Cannot find device "nvmf_init_br2" 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:25.104 Cannot find device "nvmf_tgt_br" 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:25.104 Cannot find device "nvmf_tgt_br2" 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:17:25.104 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:25.104 Cannot find device "nvmf_br" 00:17:25.104 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:17:25.104 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:25.104 Cannot find device "nvmf_init_if" 00:17:25.104 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:17:25.104 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:25.104 Cannot find device "nvmf_init_if2" 00:17:25.104 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:17:25.104 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:25.104 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:25.104 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:17:25.104 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:25.104 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:25.104 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:17:25.104 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:25.104 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:25.364 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:25.364 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:17:25.364 00:17:25.364 --- 10.0.0.3 ping statistics --- 00:17:25.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.364 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:25.364 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:25.364 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:17:25.364 00:17:25.364 --- 10.0.0.4 ping statistics --- 00:17:25.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.364 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:25.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:25.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:17:25.364 00:17:25.364 --- 10.0.0.1 ping statistics --- 00:17:25.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.364 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:25.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:25.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:17:25.364 00:17:25.364 --- 10.0.0.2 ping statistics --- 00:17:25.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.364 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:25.364 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:25.624 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:17:25.624 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:25.624 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:25.624 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:25.624 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=74005 00:17:25.624 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 74005 00:17:25.624 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 74005 ']' 00:17:25.624 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.624 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:25.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.624 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.624 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.624 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.624 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:25.624 [2024-12-09 04:07:07.397476] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:17:25.624 [2024-12-09 04:07:07.397578] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.624 [2024-12-09 04:07:07.547171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.884 [2024-12-09 04:07:07.602726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.884 [2024-12-09 04:07:07.602800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.884 [2024-12-09 04:07:07.602811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.884 [2024-12-09 04:07:07.602819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.884 [2024-12-09 04:07:07.602830] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.884 [2024-12-09 04:07:07.603252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.884 04:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:25.884 [2024-12-09 04:07:07.768552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.884 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:26.142 Malloc0 00:17:26.143 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.143 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:17:26.143 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.143 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:26.143 [2024-12-09 04:07:07.849126] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.143 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.143 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:17:26.143 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.143 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:26.143 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.143 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:26.143 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.143 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:26.143 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.143 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:26.143 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.143 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:26.143 [2024-12-09 04:07:07.873289] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:26.143 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.143 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:26.143 [2024-12-09 04:07:08.070297] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:27.520 Initializing NVMe Controllers 00:17:27.520 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:27.520 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:17:27.520 Initialization complete. Launching workers. 00:17:27.520 ======================================================== 00:17:27.520 Latency(us) 00:17:27.520 Device Information : IOPS MiB/s Average min max 00:17:27.520 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 490.48 61.31 8154.91 7002.91 16050.53 00:17:27.520 ======================================================== 00:17:27.520 Total : 490.48 61.31 8154.91 7002.91 16050.53 00:17:27.520 00:17:27.520 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:17:27.520 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:17:27.520 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.520 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:27.520 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.520 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4674 00:17:27.520 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4674 -eq 0 ]] 00:17:27.520 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:27.520 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:17:27.520 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:27.520 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:27.789 rmmod nvme_tcp 00:17:27.789 rmmod nvme_fabrics 00:17:27.789 rmmod nvme_keyring 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 74005 ']' 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 74005 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 74005 ']' 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 74005 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74005 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:27.789 killing process with pid 74005 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74005' 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 74005 00:17:27.789 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 74005 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:28.065 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:28.065 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:28.324 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:28.324 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:28.324 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.324 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.324 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.324 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:17:28.324 00:17:28.324 real 0m3.408s 00:17:28.324 user 0m2.690s 00:17:28.324 sys 0m0.862s 00:17:28.324 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.324 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:28.324 ************************************ 00:17:28.324 END TEST nvmf_wait_for_buf 00:17:28.324 ************************************ 00:17:28.324 04:07:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:17:28.324 04:07:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:28.324 04:07:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:28.324 04:07:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:28.324 04:07:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.324 04:07:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:28.324 ************************************ 00:17:28.324 START TEST nvmf_nsid 00:17:28.324 ************************************ 00:17:28.324 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:28.324 * Looking for test storage... 
00:17:28.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:28.324 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:28.324 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:28.324 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:28.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.583 --rc genhtml_branch_coverage=1 00:17:28.583 --rc genhtml_function_coverage=1 00:17:28.583 --rc genhtml_legend=1 00:17:28.583 --rc geninfo_all_blocks=1 00:17:28.583 --rc geninfo_unexecuted_blocks=1 00:17:28.583 00:17:28.583 ' 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:28.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.583 --rc genhtml_branch_coverage=1 00:17:28.583 --rc genhtml_function_coverage=1 00:17:28.583 --rc genhtml_legend=1 00:17:28.583 --rc geninfo_all_blocks=1 00:17:28.583 --rc geninfo_unexecuted_blocks=1 00:17:28.583 00:17:28.583 ' 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:28.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.583 --rc genhtml_branch_coverage=1 00:17:28.583 --rc genhtml_function_coverage=1 00:17:28.583 --rc genhtml_legend=1 00:17:28.583 --rc geninfo_all_blocks=1 00:17:28.583 --rc geninfo_unexecuted_blocks=1 00:17:28.583 00:17:28.583 ' 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:28.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.583 --rc genhtml_branch_coverage=1 00:17:28.583 --rc genhtml_function_coverage=1 00:17:28.583 --rc genhtml_legend=1 00:17:28.583 --rc geninfo_all_blocks=1 00:17:28.583 --rc geninfo_unexecuted_blocks=1 00:17:28.583 00:17:28.583 ' 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
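The scripts/common.sh trace just above runs the same lcov version gate seen earlier in this log: each version string is split on dots, dashes, and colons and compared field by field, and the branch/function coverage options are only exported when the installed lcov sorts before version 2. A condensed sketch of that comparison, under the assumption that it mirrors the traced lt/cmp_versions helpers:

    lt() {   # returns 0 (true) when version $1 sorts strictly before $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi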
00:17:28.583 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:28.584 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:28.584 Cannot find device "nvmf_init_br" 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:28.584 Cannot find device "nvmf_init_br2" 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:28.584 Cannot find device "nvmf_tgt_br" 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.584 Cannot find device "nvmf_tgt_br2" 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:28.584 Cannot find device "nvmf_init_br" 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:28.584 Cannot find device "nvmf_init_br2" 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:28.584 Cannot find device "nvmf_tgt_br" 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:28.584 Cannot find device "nvmf_tgt_br2" 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:28.584 Cannot find device "nvmf_br" 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:28.584 Cannot find device "nvmf_init_if" 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:17:28.584 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:28.584 Cannot find device "nvmf_init_if2" 00:17:28.585 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:17:28.585 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:28.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.585 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:17:28.585 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:17:28.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.585 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:17:28.585 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:28.585 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:28.585 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:28.856 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:28.857 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:28.857 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:17:28.857 00:17:28.857 --- 10.0.0.3 ping statistics --- 00:17:28.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.857 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:28.857 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:28.857 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:17:28.857 00:17:28.857 --- 10.0.0.4 ping statistics --- 00:17:28.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.857 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:28.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:28.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:17:28.857 00:17:28.857 --- 10.0.0.1 ping statistics --- 00:17:28.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.857 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:28.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:28.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:17:28.857 00:17:28.857 --- 10.0.0.2 ping statistics --- 00:17:28.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.857 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=74265 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 74265 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 74265 ']' 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.857 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:29.115 [2024-12-09 04:07:10.839891] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
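nvmfappstart above prepends the namespace wrapper to NVMF_APP and then blocks until the target's RPC socket answers; a hypothetical stand-alone equivalent of that launch-and-wait step (the real waitforlisten helper lives in autotest_common.sh, and the polling loop here is an assumed simplification):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
nvmfpid=$!
# poll the UNIX domain RPC socket until the target responds
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
done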
00:17:29.115 [2024-12-09 04:07:10.839956] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.115 [2024-12-09 04:07:10.990860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.115 [2024-12-09 04:07:11.058896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.115 [2024-12-09 04:07:11.058977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.115 [2024-12-09 04:07:11.059000] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.115 [2024-12-09 04:07:11.059011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.115 [2024-12-09 04:07:11.059020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:29.115 [2024-12-09 04:07:11.059559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.373 [2024-12-09 04:07:11.139615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=74294 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=31eaaec3-1f89-4e10-b415-8f93bbb82699 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=41a12994-1941-4cca-975a-78defffbdc2c 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=6944a072-488c-44dd-91bb-890bca5cdb98 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.373 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:29.373 null0 00:17:29.373 null1 00:17:29.373 null2 00:17:29.631 [2024-12-09 04:07:11.325077] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:29.631 [2024-12-09 04:07:11.335662] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:17:29.631 [2024-12-09 04:07:11.335744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74294 ] 00:17:29.631 [2024-12-09 04:07:11.349259] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:29.631 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.631 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 74294 /var/tmp/tgt2.sock 00:17:29.631 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 74294 ']' 00:17:29.631 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:17:29.631 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:17:29.631 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
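The three uuidgen results above become the namespace NGUIDs that are verified further down (the nvme id-ns / jq entries); a hypothetical stand-alone version of that check for the first namespace, using the UUID from this log:

ns1uuid=31eaaec3-1f89-4e10-b415-8f93bbb82699                      # from uuidgen above
expected=$(tr -d - <<< "$ns1uuid" | tr '[:lower:]' '[:upper:]')   # uuid2nguid: drop the dashes
actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
[[ $actual == "$expected" ]] || echo "NGUID mismatch: $actual != $expected"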
00:17:29.631 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.631 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:29.631 [2024-12-09 04:07:11.485450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.631 [2024-12-09 04:07:11.564625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.889 [2024-12-09 04:07:11.669092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:30.147 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.147 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:30.147 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:17:30.732 [2024-12-09 04:07:12.360923] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.732 [2024-12-09 04:07:12.376998] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:17:30.732 nvme0n1 nvme0n2 00:17:30.732 nvme1n1 00:17:30.732 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:17:30.732 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:17:30.732 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:17:30.732 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:17:30.732 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:17:30.732 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:17:30.732 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:17:30.732 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:17:30.732 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:17:30.732 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:17:30.732 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:30.732 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:30.732 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:30.732 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:17:30.732 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:17:30.732 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:17:31.669 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:31.669 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:31.669 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:31.669 04:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:31.669 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:31.669 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 31eaaec3-1f89-4e10-b415-8f93bbb82699 00:17:31.669 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:31.669 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:17:31.669 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:17:31.669 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:17:31.669 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=31eaaec31f894e10b4158f93bbb82699 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 31EAAEC31F894E10B4158F93BBB82699 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 31EAAEC31F894E10B4158F93BBB82699 == \3\1\E\A\A\E\C\3\1\F\8\9\4\E\1\0\B\4\1\5\8\F\9\3\B\B\B\8\2\6\9\9 ]] 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 41a12994-1941-4cca-975a-78defffbdc2c 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=41a1299419414cca975a78defffbdc2c 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 41A1299419414CCA975A78DEFFFBDC2C 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 41A1299419414CCA975A78DEFFFBDC2C == \4\1\A\1\2\9\9\4\1\9\4\1\4\C\C\A\9\7\5\A\7\8\D\E\F\F\F\B\D\C\2\C ]] 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:31.929 04:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 6944a072-488c-44dd-91bb-890bca5cdb98 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6944a072488c44dd91bb890bca5cdb98 00:17:31.929 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6944A072488C44DD91BB890BCA5CDB98 00:17:31.930 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 6944A072488C44DD91BB890BCA5CDB98 == \6\9\4\4\A\0\7\2\4\8\8\C\4\4\D\D\9\1\B\B\8\9\0\B\C\A\5\C\D\B\9\8 ]] 00:17:31.930 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:17:32.200 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:17:32.200 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:17:32.200 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 74294 00:17:32.200 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 74294 ']' 00:17:32.200 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 74294 00:17:32.200 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:32.200 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.200 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74294 00:17:32.200 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:32.200 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:32.200 killing process with pid 74294 00:17:32.200 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74294' 00:17:32.200 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 74294 00:17:32.200 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 74294 00:17:32.784 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:17:32.784 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:32.784 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:17:32.784 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:17:32.784 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:17:32.784 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:32.784 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:32.784 rmmod nvme_tcp 00:17:32.784 rmmod nvme_fabrics 00:17:32.784 rmmod nvme_keyring 00:17:32.784 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:32.784 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:17:32.784 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:17:32.784 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 74265 ']' 00:17:32.784 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 74265 00:17:32.784 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 74265 ']' 00:17:32.784 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 74265 00:17:32.784 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:32.784 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.785 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74265 00:17:32.785 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:32.785 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:32.785 killing process with pid 74265 00:17:32.785 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74265' 00:17:32.785 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 74265 00:17:32.785 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 74265 00:17:33.044 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:33.044 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:33.044 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:33.044 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:17:33.044 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:33.044 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:17:33.044 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:17:33.044 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:33.044 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:33.044 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:33.044 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:33.303 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:17:33.303 00:17:33.303 real 0m5.066s 00:17:33.303 user 0m7.442s 00:17:33.303 sys 0m1.819s 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:33.303 ************************************ 00:17:33.303 END TEST nvmf_nsid 00:17:33.303 ************************************ 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:33.303 00:17:33.303 real 5m14.610s 00:17:33.303 user 10m58.746s 00:17:33.303 sys 1m10.502s 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:33.303 ************************************ 00:17:33.303 04:07:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:33.303 END TEST nvmf_target_extra 00:17:33.303 ************************************ 00:17:33.562 04:07:15 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:33.562 04:07:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:33.562 04:07:15 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.562 04:07:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:33.562 ************************************ 00:17:33.562 START TEST nvmf_host 00:17:33.562 ************************************ 00:17:33.562 04:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:33.562 * Looking for test storage... 
00:17:33.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:33.562 04:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:33.562 04:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:17:33.562 04:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:33.821 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:33.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.822 --rc genhtml_branch_coverage=1 00:17:33.822 --rc genhtml_function_coverage=1 00:17:33.822 --rc genhtml_legend=1 00:17:33.822 --rc geninfo_all_blocks=1 00:17:33.822 --rc geninfo_unexecuted_blocks=1 00:17:33.822 00:17:33.822 ' 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:33.822 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:33.822 --rc genhtml_branch_coverage=1 00:17:33.822 --rc genhtml_function_coverage=1 00:17:33.822 --rc genhtml_legend=1 00:17:33.822 --rc geninfo_all_blocks=1 00:17:33.822 --rc geninfo_unexecuted_blocks=1 00:17:33.822 00:17:33.822 ' 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:33.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.822 --rc genhtml_branch_coverage=1 00:17:33.822 --rc genhtml_function_coverage=1 00:17:33.822 --rc genhtml_legend=1 00:17:33.822 --rc geninfo_all_blocks=1 00:17:33.822 --rc geninfo_unexecuted_blocks=1 00:17:33.822 00:17:33.822 ' 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:33.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.822 --rc genhtml_branch_coverage=1 00:17:33.822 --rc genhtml_function_coverage=1 00:17:33.822 --rc genhtml_legend=1 00:17:33.822 --rc geninfo_all_blocks=1 00:17:33.822 --rc geninfo_unexecuted_blocks=1 00:17:33.822 00:17:33.822 ' 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:33.822 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:33.822 
04:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.822 04:07:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.823 ************************************ 00:17:33.823 START TEST nvmf_identify 00:17:33.823 ************************************ 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:33.823 * Looking for test storage... 00:17:33.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:33.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.823 --rc genhtml_branch_coverage=1 00:17:33.823 --rc genhtml_function_coverage=1 00:17:33.823 --rc genhtml_legend=1 00:17:33.823 --rc geninfo_all_blocks=1 00:17:33.823 --rc geninfo_unexecuted_blocks=1 00:17:33.823 00:17:33.823 ' 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:33.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.823 --rc genhtml_branch_coverage=1 00:17:33.823 --rc genhtml_function_coverage=1 00:17:33.823 --rc genhtml_legend=1 00:17:33.823 --rc geninfo_all_blocks=1 00:17:33.823 --rc geninfo_unexecuted_blocks=1 00:17:33.823 00:17:33.823 ' 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:33.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.823 --rc genhtml_branch_coverage=1 00:17:33.823 --rc genhtml_function_coverage=1 00:17:33.823 --rc genhtml_legend=1 00:17:33.823 --rc geninfo_all_blocks=1 00:17:33.823 --rc geninfo_unexecuted_blocks=1 00:17:33.823 00:17:33.823 ' 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:33.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.823 --rc genhtml_branch_coverage=1 00:17:33.823 --rc genhtml_function_coverage=1 00:17:33.823 --rc genhtml_legend=1 00:17:33.823 --rc geninfo_all_blocks=1 00:17:33.823 --rc geninfo_unexecuted_blocks=1 00:17:33.823 00:17:33.823 ' 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.823 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.083 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:17:34.083 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:17:34.083 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.083 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.083 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:34.083 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.083 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:34.083 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:34.083 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.083 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.083 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.083 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.084 
04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:34.084 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.084 04:07:15 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:34.084 Cannot find device "nvmf_init_br" 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:34.084 Cannot find device "nvmf_init_br2" 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:34.084 Cannot find device "nvmf_tgt_br" 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:34.084 Cannot find device "nvmf_tgt_br2" 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:34.084 Cannot find device "nvmf_init_br" 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:34.084 Cannot find device "nvmf_init_br2" 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:34.084 Cannot find device "nvmf_tgt_br" 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:34.084 Cannot find device "nvmf_tgt_br2" 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:34.084 Cannot find device "nvmf_br" 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:34.084 Cannot find device "nvmf_init_if" 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:34.084 Cannot find device "nvmf_init_if2" 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:34.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:34.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:34.084 04:07:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:34.084 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:34.084 
04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:34.084 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:34.084 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:34.344 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:34.344 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:17:34.344 00:17:34.344 --- 10.0.0.3 ping statistics --- 00:17:34.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.344 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:34.344 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:34.344 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:17:34.344 00:17:34.344 --- 10.0.0.4 ping statistics --- 00:17:34.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.344 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:34.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:34.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:17:34.344 00:17:34.344 --- 10.0.0.1 ping statistics --- 00:17:34.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.344 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:34.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:34.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:17:34.344 00:17:34.344 --- 10.0.0.2 ping statistics --- 00:17:34.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.344 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74652 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74652 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74652 ']' 00:17:34.344 
04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.344 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:34.344 [2024-12-09 04:07:16.272215] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:17:34.344 [2024-12-09 04:07:16.272317] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.604 [2024-12-09 04:07:16.425542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:34.604 [2024-12-09 04:07:16.497018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.604 [2024-12-09 04:07:16.497093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.604 [2024-12-09 04:07:16.497112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.604 [2024-12-09 04:07:16.497123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.604 [2024-12-09 04:07:16.497133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
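For readers following the nvmf_veth_init trace above, the setup it performs reduces to roughly the standalone script below. The namespace, interface and bridge names, the 10.0.0.1-10.0.0.4/24 addressing and port 4420 are taken directly from the trace; this is a condensed sketch for orientation, not the exact nvmf/common.sh implementation (the harness's cleanup, error suppression and retry logic are omitted).

#!/usr/bin/env bash
# Sketch of the veth/netns/bridge topology built by nvmf_veth_init (values from the trace above).
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# veth pairs: the *_if ends carry traffic, the *_br peer ends get enslaved to a bridge.
# Target-side interfaces are moved into the target namespace.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: initiators 10.0.0.1/.2 in the root namespace, targets 10.0.0.3/.4 in the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# Bridge the peer ends together so the two namespaces can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP traffic (port 4420) in on the initiator interfaces and across the bridge.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks, mirroring the pings in the trace (both directions across the bridge).
ping -c 1 10.0.0.3
ping -c 1 10.0.0.4
ip netns exec "$NS" ping -c 1 10.0.0.1
ip netns exec "$NS" ping -c 1 10.0.0.2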
00:17:34.604 [2024-12-09 04:07:16.498747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.604 [2024-12-09 04:07:16.498913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.604 [2024-12-09 04:07:16.499040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:34.604 [2024-12-09 04:07:16.499040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.862 [2024-12-09 04:07:16.580269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:34.862 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.862 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:17:34.862 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:34.862 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.862 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:34.862 [2024-12-09 04:07:16.677470] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.862 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.863 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:34.863 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:34.863 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:34.863 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:34.863 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.863 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:34.863 Malloc0 00:17:34.863 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.863 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:34.863 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.863 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:34.863 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.863 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:34.863 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.863 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:34.863 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.136 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:35.136 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.136 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:35.136 [2024-12-09 04:07:16.818263] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:35.136 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.136 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:35.136 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.136 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:35.136 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.136 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:35.136 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.136 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:35.136 [ 00:17:35.136 { 00:17:35.136 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:35.136 "subtype": "Discovery", 00:17:35.136 "listen_addresses": [ 00:17:35.136 { 00:17:35.136 "trtype": "TCP", 00:17:35.136 "adrfam": "IPv4", 00:17:35.136 "traddr": "10.0.0.3", 00:17:35.136 "trsvcid": "4420" 00:17:35.136 } 00:17:35.136 ], 00:17:35.136 "allow_any_host": true, 00:17:35.136 "hosts": [] 00:17:35.136 }, 00:17:35.136 { 00:17:35.136 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.136 "subtype": "NVMe", 00:17:35.136 "listen_addresses": [ 00:17:35.136 { 00:17:35.136 "trtype": "TCP", 00:17:35.136 "adrfam": "IPv4", 00:17:35.136 "traddr": "10.0.0.3", 00:17:35.136 "trsvcid": "4420" 00:17:35.136 } 00:17:35.136 ], 00:17:35.136 "allow_any_host": true, 00:17:35.136 "hosts": [], 00:17:35.136 "serial_number": "SPDK00000000000001", 00:17:35.136 "model_number": "SPDK bdev Controller", 00:17:35.136 "max_namespaces": 32, 00:17:35.136 "min_cntlid": 1, 00:17:35.136 "max_cntlid": 65519, 00:17:35.136 "namespaces": [ 00:17:35.136 { 00:17:35.136 "nsid": 1, 00:17:35.136 "bdev_name": "Malloc0", 00:17:35.136 "name": "Malloc0", 00:17:35.136 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:35.136 "eui64": "ABCDEF0123456789", 00:17:35.136 "uuid": "3bc98fb1-3b30-45f1-a467-918e919f5035" 00:17:35.136 } 00:17:35.136 ] 00:17:35.136 } 00:17:35.136 ] 00:17:35.136 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.136 04:07:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:35.136 [2024-12-09 04:07:16.884507] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
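The rpc_cmd calls above are issued through the autotest harness; outside the harness the same target provisioning can be reproduced with the SPDK scripts/rpc.py client against the target's RPC socket (the trace waits on the default /var/tmp/spdk.sock). The method names and arguments below are copied from the trace; treat this as an illustrative sequence under those assumptions, not the harness's exact invocation.

#!/usr/bin/env bash
# Provisioning sequence equivalent to the rpc_cmd calls in the trace above.
# Path assumed to match the spdk_repo checkout seen in the trace; adjust as needed.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Enable the TCP transport (flags copied verbatim from the trace).
$RPC nvmf_create_transport -t tcp -o -u 8192

# Back the namespace with a 64 MiB, 512-byte-block malloc bdev.
$RPC bdev_malloc_create 64 512 -b Malloc0

# Create the subsystem, attach the namespace, and expose listeners on 10.0.0.3:4420
# for both the NVM subsystem and the discovery service.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

# Inspect the result; this is what produced the JSON subsystem dump above.
$RPC nvmf_get_subsystems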
00:17:35.136 [2024-12-09 04:07:16.884549] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74674 ] 00:17:35.136 [2024-12-09 04:07:17.042506] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:17:35.136 [2024-12-09 04:07:17.042595] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:35.136 [2024-12-09 04:07:17.042602] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:35.136 [2024-12-09 04:07:17.042616] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:35.136 [2024-12-09 04:07:17.042628] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:35.136 [2024-12-09 04:07:17.042991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:17:35.136 [2024-12-09 04:07:17.043061] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12dc750 0 00:17:35.136 [2024-12-09 04:07:17.050264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:35.136 [2024-12-09 04:07:17.050292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:35.136 [2024-12-09 04:07:17.050315] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:35.136 [2024-12-09 04:07:17.050319] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:35.136 [2024-12-09 04:07:17.050356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.136 [2024-12-09 04:07:17.050364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.136 [2024-12-09 04:07:17.050369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dc750) 00:17:35.136 [2024-12-09 04:07:17.050384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:35.136 [2024-12-09 04:07:17.050417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340740, cid 0, qid 0 00:17:35.136 [2024-12-09 04:07:17.058234] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.136 [2024-12-09 04:07:17.058258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.136 [2024-12-09 04:07:17.058280] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.136 [2024-12-09 04:07:17.058285] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340740) on tqpair=0x12dc750 00:17:35.136 [2024-12-09 04:07:17.058298] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:35.136 [2024-12-09 04:07:17.058306] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:17:35.136 [2024-12-09 04:07:17.058312] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:17:35.136 [2024-12-09 04:07:17.058331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.136 [2024-12-09 04:07:17.058337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:17:35.136 [2024-12-09 04:07:17.058340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dc750) 00:17:35.136 [2024-12-09 04:07:17.058350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.136 [2024-12-09 04:07:17.058376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340740, cid 0, qid 0 00:17:35.136 [2024-12-09 04:07:17.058442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.136 [2024-12-09 04:07:17.058449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.136 [2024-12-09 04:07:17.058452] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.136 [2024-12-09 04:07:17.058456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340740) on tqpair=0x12dc750 00:17:35.136 [2024-12-09 04:07:17.058478] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:17:35.136 [2024-12-09 04:07:17.058502] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:17:35.136 [2024-12-09 04:07:17.058510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.136 [2024-12-09 04:07:17.058514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.136 [2024-12-09 04:07:17.058518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dc750) 00:17:35.136 [2024-12-09 04:07:17.058526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.136 [2024-12-09 04:07:17.058546] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340740, cid 0, qid 0 00:17:35.136 [2024-12-09 04:07:17.058595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.136 [2024-12-09 04:07:17.058602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.136 [2024-12-09 04:07:17.058606] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.136 [2024-12-09 04:07:17.058610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340740) on tqpair=0x12dc750 00:17:35.136 [2024-12-09 04:07:17.058617] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:17:35.136 [2024-12-09 04:07:17.058626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:35.136 [2024-12-09 04:07:17.058633] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.136 [2024-12-09 04:07:17.058638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.136 [2024-12-09 04:07:17.058641] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dc750) 00:17:35.136 [2024-12-09 04:07:17.058649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.136 [2024-12-09 04:07:17.058683] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340740, cid 0, qid 0 00:17:35.136 [2024-12-09 04:07:17.058724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.136 [2024-12-09 04:07:17.058731] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.136 [2024-12-09 04:07:17.058735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.136 [2024-12-09 04:07:17.058739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340740) on tqpair=0x12dc750 00:17:35.136 [2024-12-09 04:07:17.058745] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:35.136 [2024-12-09 04:07:17.058756] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.136 [2024-12-09 04:07:17.058760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.136 [2024-12-09 04:07:17.058764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dc750) 00:17:35.136 [2024-12-09 04:07:17.058772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.136 [2024-12-09 04:07:17.058789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340740, cid 0, qid 0 00:17:35.136 [2024-12-09 04:07:17.058838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.136 [2024-12-09 04:07:17.058846] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.136 [2024-12-09 04:07:17.058849] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.058854] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340740) on tqpair=0x12dc750 00:17:35.137 [2024-12-09 04:07:17.058859] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:35.137 [2024-12-09 04:07:17.058865] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:35.137 [2024-12-09 04:07:17.058873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:35.137 [2024-12-09 04:07:17.058984] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:17:35.137 [2024-12-09 04:07:17.059001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:35.137 [2024-12-09 04:07:17.059027] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059036] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dc750) 00:17:35.137 [2024-12-09 04:07:17.059044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.137 [2024-12-09 04:07:17.059065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340740, cid 0, qid 0 00:17:35.137 [2024-12-09 04:07:17.059109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.137 [2024-12-09 04:07:17.059116] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.137 [2024-12-09 04:07:17.059119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:17:35.137 [2024-12-09 04:07:17.059124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340740) on tqpair=0x12dc750 00:17:35.137 [2024-12-09 04:07:17.059129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:35.137 [2024-12-09 04:07:17.059139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dc750) 00:17:35.137 [2024-12-09 04:07:17.059155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.137 [2024-12-09 04:07:17.059172] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340740, cid 0, qid 0 00:17:35.137 [2024-12-09 04:07:17.059232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.137 [2024-12-09 04:07:17.059242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.137 [2024-12-09 04:07:17.059246] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059250] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340740) on tqpair=0x12dc750 00:17:35.137 [2024-12-09 04:07:17.059256] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:35.137 [2024-12-09 04:07:17.059261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:35.137 [2024-12-09 04:07:17.059270] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:17:35.137 [2024-12-09 04:07:17.059281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:35.137 [2024-12-09 04:07:17.059293] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dc750) 00:17:35.137 [2024-12-09 04:07:17.059307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.137 [2024-12-09 04:07:17.059328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340740, cid 0, qid 0 00:17:35.137 [2024-12-09 04:07:17.059425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:35.137 [2024-12-09 04:07:17.059433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:35.137 [2024-12-09 04:07:17.059437] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059441] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12dc750): datao=0, datal=4096, cccid=0 00:17:35.137 [2024-12-09 04:07:17.059446] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1340740) on tqpair(0x12dc750): expected_datao=0, payload_size=4096 00:17:35.137 [2024-12-09 04:07:17.059452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059460] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059465] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059474] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.137 [2024-12-09 04:07:17.059481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.137 [2024-12-09 04:07:17.059484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059489] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340740) on tqpair=0x12dc750 00:17:35.137 [2024-12-09 04:07:17.059499] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:17:35.137 [2024-12-09 04:07:17.059505] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:17:35.137 [2024-12-09 04:07:17.059510] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:17:35.137 [2024-12-09 04:07:17.059516] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:17:35.137 [2024-12-09 04:07:17.059521] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:17:35.137 [2024-12-09 04:07:17.059527] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:17:35.137 [2024-12-09 04:07:17.059536] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:35.137 [2024-12-09 04:07:17.059545] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dc750) 00:17:35.137 [2024-12-09 04:07:17.059561] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.137 [2024-12-09 04:07:17.059582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340740, cid 0, qid 0 00:17:35.137 [2024-12-09 04:07:17.059639] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.137 [2024-12-09 04:07:17.059646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.137 [2024-12-09 04:07:17.059650] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059654] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340740) on tqpair=0x12dc750 00:17:35.137 [2024-12-09 04:07:17.059671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12dc750) 00:17:35.137 [2024-12-09 04:07:17.059688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.137 
[2024-12-09 04:07:17.059695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12dc750) 00:17:35.137 [2024-12-09 04:07:17.059709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.137 [2024-12-09 04:07:17.059715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12dc750) 00:17:35.137 [2024-12-09 04:07:17.059729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.137 [2024-12-09 04:07:17.059735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059743] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dc750) 00:17:35.137 [2024-12-09 04:07:17.059749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.137 [2024-12-09 04:07:17.059770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:35.137 [2024-12-09 04:07:17.059779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:35.137 [2024-12-09 04:07:17.059786] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12dc750) 00:17:35.137 [2024-12-09 04:07:17.059797] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.137 [2024-12-09 04:07:17.059818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340740, cid 0, qid 0 00:17:35.137 [2024-12-09 04:07:17.059825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13408c0, cid 1, qid 0 00:17:35.137 [2024-12-09 04:07:17.059830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340a40, cid 2, qid 0 00:17:35.137 [2024-12-09 04:07:17.059834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340bc0, cid 3, qid 0 00:17:35.137 [2024-12-09 04:07:17.059839] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340d40, cid 4, qid 0 00:17:35.137 [2024-12-09 04:07:17.059922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.137 [2024-12-09 04:07:17.059929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.137 [2024-12-09 04:07:17.059932] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340d40) on tqpair=0x12dc750 00:17:35.137 [2024-12-09 
04:07:17.059943] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:17:35.137 [2024-12-09 04:07:17.059952] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:17:35.137 [2024-12-09 04:07:17.059964] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.137 [2024-12-09 04:07:17.059969] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12dc750) 00:17:35.137 [2024-12-09 04:07:17.059977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.138 [2024-12-09 04:07:17.059995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340d40, cid 4, qid 0 00:17:35.138 [2024-12-09 04:07:17.060051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:35.138 [2024-12-09 04:07:17.060058] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:35.138 [2024-12-09 04:07:17.060062] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060065] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12dc750): datao=0, datal=4096, cccid=4 00:17:35.138 [2024-12-09 04:07:17.060070] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1340d40) on tqpair(0x12dc750): expected_datao=0, payload_size=4096 00:17:35.138 [2024-12-09 04:07:17.060075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060082] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060086] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.138 [2024-12-09 04:07:17.060100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.138 [2024-12-09 04:07:17.060104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060108] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340d40) on tqpair=0x12dc750 00:17:35.138 [2024-12-09 04:07:17.060122] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:17:35.138 [2024-12-09 04:07:17.060153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12dc750) 00:17:35.138 [2024-12-09 04:07:17.060166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.138 [2024-12-09 04:07:17.060174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12dc750) 00:17:35.138 [2024-12-09 04:07:17.060205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.138 [2024-12-09 04:07:17.060233] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340d40, cid 4, qid 0 00:17:35.138 [2024-12-09 04:07:17.060240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340ec0, cid 5, qid 0 00:17:35.138 [2024-12-09 04:07:17.060342] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:35.138 [2024-12-09 04:07:17.060349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:35.138 [2024-12-09 04:07:17.060353] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060357] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12dc750): datao=0, datal=1024, cccid=4 00:17:35.138 [2024-12-09 04:07:17.060362] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1340d40) on tqpair(0x12dc750): expected_datao=0, payload_size=1024 00:17:35.138 [2024-12-09 04:07:17.060366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060373] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060377] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.138 [2024-12-09 04:07:17.060389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.138 [2024-12-09 04:07:17.060392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340ec0) on tqpair=0x12dc750 00:17:35.138 [2024-12-09 04:07:17.060413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.138 [2024-12-09 04:07:17.060421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.138 [2024-12-09 04:07:17.060424] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340d40) on tqpair=0x12dc750 00:17:35.138 [2024-12-09 04:07:17.060441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12dc750) 00:17:35.138 [2024-12-09 04:07:17.060455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.138 [2024-12-09 04:07:17.060479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340d40, cid 4, qid 0 00:17:35.138 [2024-12-09 04:07:17.060544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:35.138 [2024-12-09 04:07:17.060551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:35.138 [2024-12-09 04:07:17.060555] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060559] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12dc750): datao=0, datal=3072, cccid=4 00:17:35.138 [2024-12-09 04:07:17.060563] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1340d40) on tqpair(0x12dc750): expected_datao=0, payload_size=3072 00:17:35.138 [2024-12-09 04:07:17.060568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060575] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:17:35.138 [2024-12-09 04:07:17.060579] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.138 [2024-12-09 04:07:17.060593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.138 [2024-12-09 04:07:17.060596] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340d40) on tqpair=0x12dc750 00:17:35.138 [2024-12-09 04:07:17.060610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12dc750) 00:17:35.138 [2024-12-09 04:07:17.060622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.138 [2024-12-09 04:07:17.060645] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340d40, cid 4, qid 0 00:17:35.138 [2024-12-09 04:07:17.060706] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:35.138 [2024-12-09 04:07:17.060713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:35.138 [2024-12-09 04:07:17.060717] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060720] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12dc750): datao=0, datal=8, cccid=4 00:17:35.138 [2024-12-09 04:07:17.060725] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1340d40) on tqpair(0x12dc750): expected_datao=0, payload_size=8 00:17:35.138 [2024-12-09 04:07:17.060729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060736] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:35.138 [2024-12-09 04:07:17.060740] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:35.138 ===================================================== 00:17:35.138 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:35.138 ===================================================== 00:17:35.138 Controller Capabilities/Features 00:17:35.138 ================================ 00:17:35.138 Vendor ID: 0000 00:17:35.138 Subsystem Vendor ID: 0000 00:17:35.138 Serial Number: .................... 00:17:35.138 Model Number: ........................................ 
00:17:35.138 Firmware Version: 25.01 00:17:35.138 Recommended Arb Burst: 0 00:17:35.138 IEEE OUI Identifier: 00 00 00 00:17:35.138 Multi-path I/O 00:17:35.138 May have multiple subsystem ports: No 00:17:35.138 May have multiple controllers: No 00:17:35.138 Associated with SR-IOV VF: No 00:17:35.138 Max Data Transfer Size: 131072 00:17:35.138 Max Number of Namespaces: 0 00:17:35.138 Max Number of I/O Queues: 1024 00:17:35.138 NVMe Specification Version (VS): 1.3 00:17:35.138 NVMe Specification Version (Identify): 1.3 00:17:35.138 Maximum Queue Entries: 128 00:17:35.138 Contiguous Queues Required: Yes 00:17:35.138 Arbitration Mechanisms Supported 00:17:35.138 Weighted Round Robin: Not Supported 00:17:35.138 Vendor Specific: Not Supported 00:17:35.138 Reset Timeout: 15000 ms 00:17:35.138 Doorbell Stride: 4 bytes 00:17:35.138 NVM Subsystem Reset: Not Supported 00:17:35.138 Command Sets Supported 00:17:35.138 NVM Command Set: Supported 00:17:35.138 Boot Partition: Not Supported 00:17:35.138 Memory Page Size Minimum: 4096 bytes 00:17:35.138 Memory Page Size Maximum: 4096 bytes 00:17:35.138 Persistent Memory Region: Not Supported 00:17:35.138 Optional Asynchronous Events Supported 00:17:35.138 Namespace Attribute Notices: Not Supported 00:17:35.138 Firmware Activation Notices: Not Supported 00:17:35.138 ANA Change Notices: Not Supported 00:17:35.138 PLE Aggregate Log Change Notices: Not Supported 00:17:35.138 LBA Status Info Alert Notices: Not Supported 00:17:35.138 EGE Aggregate Log Change Notices: Not Supported 00:17:35.138 Normal NVM Subsystem Shutdown event: Not Supported 00:17:35.138 Zone Descriptor Change Notices: Not Supported 00:17:35.138 Discovery Log Change Notices: Supported 00:17:35.138 Controller Attributes 00:17:35.138 128-bit Host Identifier: Not Supported 00:17:35.138 Non-Operational Permissive Mode: Not Supported 00:17:35.138 NVM Sets: Not Supported 00:17:35.138 Read Recovery Levels: Not Supported 00:17:35.138 Endurance Groups: Not Supported 00:17:35.138 Predictable Latency Mode: Not Supported 00:17:35.138 Traffic Based Keep ALive: Not Supported 00:17:35.138 Namespace Granularity: Not Supported 00:17:35.138 SQ Associations: Not Supported 00:17:35.138 UUID List: Not Supported 00:17:35.138 Multi-Domain Subsystem: Not Supported 00:17:35.138 Fixed Capacity Management: Not Supported 00:17:35.138 Variable Capacity Management: Not Supported 00:17:35.138 Delete Endurance Group: Not Supported 00:17:35.138 Delete NVM Set: Not Supported 00:17:35.138 Extended LBA Formats Supported: Not Supported 00:17:35.138 Flexible Data Placement Supported: Not Supported 00:17:35.138 00:17:35.138 Controller Memory Buffer Support 00:17:35.138 ================================ 00:17:35.139 Supported: No 00:17:35.139 00:17:35.139 Persistent Memory Region Support 00:17:35.139 ================================ 00:17:35.139 Supported: No 00:17:35.139 00:17:35.139 Admin Command Set Attributes 00:17:35.139 ============================ 00:17:35.139 Security Send/Receive: Not Supported 00:17:35.139 Format NVM: Not Supported 00:17:35.139 Firmware Activate/Download: Not Supported 00:17:35.139 Namespace Management: Not Supported 00:17:35.139 Device Self-Test: Not Supported 00:17:35.139 Directives: Not Supported 00:17:35.139 NVMe-MI: Not Supported 00:17:35.139 Virtualization Management: Not Supported 00:17:35.139 Doorbell Buffer Config: Not Supported 00:17:35.139 Get LBA Status Capability: Not Supported 00:17:35.139 Command & Feature Lockdown Capability: Not Supported 00:17:35.139 Abort Command Limit: 1 00:17:35.139 Async 
Event Request Limit: 4 00:17:35.139 Number of Firmware Slots: N/A 00:17:35.139 Firmware Slot 1 Read-Only: N/A 00:17:35.139 Firm[2024-12-09 04:07:17.060754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.139 [2024-12-09 04:07:17.060762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.139 [2024-12-09 04:07:17.060765] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.139 [2024-12-09 04:07:17.060769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340d40) on tqpair=0x12dc750 00:17:35.139 ware Activation Without Reset: N/A 00:17:35.139 Multiple Update Detection Support: N/A 00:17:35.139 Firmware Update Granularity: No Information Provided 00:17:35.139 Per-Namespace SMART Log: No 00:17:35.139 Asymmetric Namespace Access Log Page: Not Supported 00:17:35.139 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:35.139 Command Effects Log Page: Not Supported 00:17:35.139 Get Log Page Extended Data: Supported 00:17:35.139 Telemetry Log Pages: Not Supported 00:17:35.139 Persistent Event Log Pages: Not Supported 00:17:35.139 Supported Log Pages Log Page: May Support 00:17:35.139 Commands Supported & Effects Log Page: Not Supported 00:17:35.139 Feature Identifiers & Effects Log Page:May Support 00:17:35.139 NVMe-MI Commands & Effects Log Page: May Support 00:17:35.139 Data Area 4 for Telemetry Log: Not Supported 00:17:35.139 Error Log Page Entries Supported: 128 00:17:35.139 Keep Alive: Not Supported 00:17:35.139 00:17:35.139 NVM Command Set Attributes 00:17:35.139 ========================== 00:17:35.139 Submission Queue Entry Size 00:17:35.139 Max: 1 00:17:35.139 Min: 1 00:17:35.139 Completion Queue Entry Size 00:17:35.139 Max: 1 00:17:35.139 Min: 1 00:17:35.139 Number of Namespaces: 0 00:17:35.139 Compare Command: Not Supported 00:17:35.139 Write Uncorrectable Command: Not Supported 00:17:35.139 Dataset Management Command: Not Supported 00:17:35.139 Write Zeroes Command: Not Supported 00:17:35.139 Set Features Save Field: Not Supported 00:17:35.139 Reservations: Not Supported 00:17:35.139 Timestamp: Not Supported 00:17:35.139 Copy: Not Supported 00:17:35.139 Volatile Write Cache: Not Present 00:17:35.139 Atomic Write Unit (Normal): 1 00:17:35.139 Atomic Write Unit (PFail): 1 00:17:35.139 Atomic Compare & Write Unit: 1 00:17:35.139 Fused Compare & Write: Supported 00:17:35.139 Scatter-Gather List 00:17:35.139 SGL Command Set: Supported 00:17:35.139 SGL Keyed: Supported 00:17:35.139 SGL Bit Bucket Descriptor: Not Supported 00:17:35.139 SGL Metadata Pointer: Not Supported 00:17:35.139 Oversized SGL: Not Supported 00:17:35.139 SGL Metadata Address: Not Supported 00:17:35.139 SGL Offset: Supported 00:17:35.139 Transport SGL Data Block: Not Supported 00:17:35.139 Replay Protected Memory Block: Not Supported 00:17:35.139 00:17:35.139 Firmware Slot Information 00:17:35.139 ========================= 00:17:35.139 Active slot: 0 00:17:35.139 00:17:35.139 00:17:35.139 Error Log 00:17:35.139 ========= 00:17:35.139 00:17:35.139 Active Namespaces 00:17:35.139 ================= 00:17:35.139 Discovery Log Page 00:17:35.139 ================== 00:17:35.139 Generation Counter: 2 00:17:35.139 Number of Records: 2 00:17:35.139 Record Format: 0 00:17:35.139 00:17:35.139 Discovery Log Entry 0 00:17:35.139 ---------------------- 00:17:35.139 Transport Type: 3 (TCP) 00:17:35.139 Address Family: 1 (IPv4) 00:17:35.139 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:35.139 Entry Flags: 00:17:35.139 Duplicate Returned 
Information: 1 00:17:35.139 Explicit Persistent Connection Support for Discovery: 1 00:17:35.139 Transport Requirements: 00:17:35.139 Secure Channel: Not Required 00:17:35.139 Port ID: 0 (0x0000) 00:17:35.139 Controller ID: 65535 (0xffff) 00:17:35.139 Admin Max SQ Size: 128 00:17:35.139 Transport Service Identifier: 4420 00:17:35.139 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:35.139 Transport Address: 10.0.0.3 00:17:35.139 Discovery Log Entry 1 00:17:35.139 ---------------------- 00:17:35.139 Transport Type: 3 (TCP) 00:17:35.139 Address Family: 1 (IPv4) 00:17:35.139 Subsystem Type: 2 (NVM Subsystem) 00:17:35.139 Entry Flags: 00:17:35.139 Duplicate Returned Information: 0 00:17:35.139 Explicit Persistent Connection Support for Discovery: 0 00:17:35.139 Transport Requirements: 00:17:35.139 Secure Channel: Not Required 00:17:35.139 Port ID: 0 (0x0000) 00:17:35.139 Controller ID: 65535 (0xffff) 00:17:35.139 Admin Max SQ Size: 128 00:17:35.139 Transport Service Identifier: 4420 00:17:35.139 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:35.139 Transport Address: 10.0.0.3 [2024-12-09 04:07:17.060866] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:17:35.139 [2024-12-09 04:07:17.060881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340740) on tqpair=0x12dc750 00:17:35.139 [2024-12-09 04:07:17.060888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.139 [2024-12-09 04:07:17.060894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13408c0) on tqpair=0x12dc750 00:17:35.139 [2024-12-09 04:07:17.060898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.139 [2024-12-09 04:07:17.060904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340a40) on tqpair=0x12dc750 00:17:35.139 [2024-12-09 04:07:17.060908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.139 [2024-12-09 04:07:17.060913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340bc0) on tqpair=0x12dc750 00:17:35.139 [2024-12-09 04:07:17.060918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.139 [2024-12-09 04:07:17.060928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.139 [2024-12-09 04:07:17.060933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.139 [2024-12-09 04:07:17.060937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dc750) 00:17:35.139 [2024-12-09 04:07:17.060945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.139 [2024-12-09 04:07:17.060968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340bc0, cid 3, qid 0 00:17:35.139 [2024-12-09 04:07:17.061016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.139 [2024-12-09 04:07:17.061023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.139 [2024-12-09 04:07:17.061027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.139 [2024-12-09 04:07:17.061031] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340bc0) on tqpair=0x12dc750 00:17:35.139 [2024-12-09 04:07:17.061039] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.139 [2024-12-09 04:07:17.061044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.139 [2024-12-09 04:07:17.061047] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dc750) 00:17:35.139 [2024-12-09 04:07:17.061055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.139 [2024-12-09 04:07:17.061076] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340bc0, cid 3, qid 0 00:17:35.139 [2024-12-09 04:07:17.061135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.139 [2024-12-09 04:07:17.061142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.139 [2024-12-09 04:07:17.061146] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.139 [2024-12-09 04:07:17.061150] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340bc0) on tqpair=0x12dc750 00:17:35.139 [2024-12-09 04:07:17.061160] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:17:35.139 [2024-12-09 04:07:17.061178] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:17:35.139 [2024-12-09 04:07:17.061192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.139 [2024-12-09 04:07:17.061197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.139 [2024-12-09 04:07:17.061200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dc750) 00:17:35.139 [2024-12-09 04:07:17.061208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.139 [2024-12-09 04:07:17.061228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340bc0, cid 3, qid 0 00:17:35.139 [2024-12-09 04:07:17.061278] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.140 [2024-12-09 04:07:17.061285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.140 [2024-12-09 04:07:17.061288] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340bc0) on tqpair=0x12dc750 00:17:35.140 [2024-12-09 04:07:17.061303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dc750) 00:17:35.140 [2024-12-09 04:07:17.061319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.140 [2024-12-09 04:07:17.061336] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340bc0, cid 3, qid 0 00:17:35.140 [2024-12-09 04:07:17.061377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.140 [2024-12-09 04:07:17.061383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.140 [2024-12-09 
04:07:17.061387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340bc0) on tqpair=0x12dc750 00:17:35.140 [2024-12-09 04:07:17.061401] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dc750) 00:17:35.140 [2024-12-09 04:07:17.061417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.140 [2024-12-09 04:07:17.061433] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340bc0, cid 3, qid 0 00:17:35.140 [2024-12-09 04:07:17.061480] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.140 [2024-12-09 04:07:17.061487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.140 [2024-12-09 04:07:17.061491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340bc0) on tqpair=0x12dc750 00:17:35.140 [2024-12-09 04:07:17.061505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dc750) 00:17:35.140 [2024-12-09 04:07:17.061520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.140 [2024-12-09 04:07:17.061537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340bc0, cid 3, qid 0 00:17:35.140 [2024-12-09 04:07:17.061582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.140 [2024-12-09 04:07:17.061590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.140 [2024-12-09 04:07:17.061594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340bc0) on tqpair=0x12dc750 00:17:35.140 [2024-12-09 04:07:17.061609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dc750) 00:17:35.140 [2024-12-09 04:07:17.061653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.140 [2024-12-09 04:07:17.061672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340bc0, cid 3, qid 0 00:17:35.140 [2024-12-09 04:07:17.061717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.140 [2024-12-09 04:07:17.061724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.140 [2024-12-09 04:07:17.061728] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061732] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340bc0) on 
tqpair=0x12dc750 00:17:35.140 [2024-12-09 04:07:17.061743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dc750) 00:17:35.140 [2024-12-09 04:07:17.061760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.140 [2024-12-09 04:07:17.061777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340bc0, cid 3, qid 0 00:17:35.140 [2024-12-09 04:07:17.061819] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.140 [2024-12-09 04:07:17.061825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.140 [2024-12-09 04:07:17.061829] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340bc0) on tqpair=0x12dc750 00:17:35.140 [2024-12-09 04:07:17.061844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dc750) 00:17:35.140 [2024-12-09 04:07:17.061860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.140 [2024-12-09 04:07:17.061877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340bc0, cid 3, qid 0 00:17:35.140 [2024-12-09 04:07:17.061924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.140 [2024-12-09 04:07:17.061930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.140 [2024-12-09 04:07:17.061934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340bc0) on tqpair=0x12dc750 00:17:35.140 [2024-12-09 04:07:17.061949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.061973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dc750) 00:17:35.140 [2024-12-09 04:07:17.061980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.140 [2024-12-09 04:07:17.061996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340bc0, cid 3, qid 0 00:17:35.140 [2024-12-09 04:07:17.062039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.140 [2024-12-09 04:07:17.062046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.140 [2024-12-09 04:07:17.062050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.062054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340bc0) on tqpair=0x12dc750 00:17:35.140 [2024-12-09 04:07:17.062064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.062069] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.062073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dc750) 00:17:35.140 [2024-12-09 04:07:17.062080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.140 [2024-12-09 04:07:17.062096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340bc0, cid 3, qid 0 00:17:35.140 [2024-12-09 04:07:17.062139] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.140 [2024-12-09 04:07:17.062146] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.140 [2024-12-09 04:07:17.062150] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.062154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340bc0) on tqpair=0x12dc750 00:17:35.140 [2024-12-09 04:07:17.062164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.062169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.062172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dc750) 00:17:35.140 [2024-12-09 04:07:17.062180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.140 [2024-12-09 04:07:17.062196] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340bc0, cid 3, qid 0 00:17:35.140 [2024-12-09 04:07:17.066268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.140 [2024-12-09 04:07:17.066276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.140 [2024-12-09 04:07:17.066280] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.066284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340bc0) on tqpair=0x12dc750 00:17:35.140 [2024-12-09 04:07:17.066299] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.066304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.140 [2024-12-09 04:07:17.066308] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12dc750) 00:17:35.140 [2024-12-09 04:07:17.066316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.140 [2024-12-09 04:07:17.066342] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340bc0, cid 3, qid 0 00:17:35.140 [2024-12-09 04:07:17.066391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.141 [2024-12-09 04:07:17.066398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.141 [2024-12-09 04:07:17.066401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.141 [2024-12-09 04:07:17.066405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340bc0) on tqpair=0x12dc750 00:17:35.141 [2024-12-09 04:07:17.066414] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:17:35.404 00:17:35.404 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 
traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:35.404 [2024-12-09 04:07:17.112872] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:17:35.404 [2024-12-09 04:07:17.113087] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74682 ] 00:17:35.404 [2024-12-09 04:07:17.274233] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:17:35.404 [2024-12-09 04:07:17.274304] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:35.404 [2024-12-09 04:07:17.274312] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:35.404 [2024-12-09 04:07:17.274327] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:35.404 [2024-12-09 04:07:17.274338] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:35.404 [2024-12-09 04:07:17.274657] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:17:35.404 [2024-12-09 04:07:17.274711] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc8d750 0 00:17:35.404 [2024-12-09 04:07:17.288277] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:35.404 [2024-12-09 04:07:17.288305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:35.404 [2024-12-09 04:07:17.288328] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:35.404 [2024-12-09 04:07:17.288331] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:35.404 [2024-12-09 04:07:17.288365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.404 [2024-12-09 04:07:17.288372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.404 [2024-12-09 04:07:17.288376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8d750) 00:17:35.404 [2024-12-09 04:07:17.288389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:35.404 [2024-12-09 04:07:17.288419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1740, cid 0, qid 0 00:17:35.404 [2024-12-09 04:07:17.296229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.404 [2024-12-09 04:07:17.296253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.404 [2024-12-09 04:07:17.296274] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.296279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1740) on tqpair=0xc8d750 00:17:35.405 [2024-12-09 04:07:17.296290] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:35.405 [2024-12-09 04:07:17.296310] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:17:35.405 [2024-12-09 04:07:17.296317] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:17:35.405 [2024-12-09 04:07:17.296336] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.296342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.296346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8d750) 00:17:35.405 [2024-12-09 04:07:17.296355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.405 [2024-12-09 04:07:17.296385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1740, cid 0, qid 0 00:17:35.405 [2024-12-09 04:07:17.296434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.405 [2024-12-09 04:07:17.296441] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.405 [2024-12-09 04:07:17.296445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.296449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1740) on tqpair=0xc8d750 00:17:35.405 [2024-12-09 04:07:17.296455] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:17:35.405 [2024-12-09 04:07:17.296463] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:17:35.405 [2024-12-09 04:07:17.296470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.296475] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.296478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8d750) 00:17:35.405 [2024-12-09 04:07:17.296486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.405 [2024-12-09 04:07:17.296536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1740, cid 0, qid 0 00:17:35.405 [2024-12-09 04:07:17.296606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.405 [2024-12-09 04:07:17.296613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.405 [2024-12-09 04:07:17.296616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.296620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1740) on tqpair=0xc8d750 00:17:35.405 [2024-12-09 04:07:17.296626] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:17:35.405 [2024-12-09 04:07:17.296635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:35.405 [2024-12-09 04:07:17.296642] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.296647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.296650] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8d750) 00:17:35.405 [2024-12-09 04:07:17.296658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.405 [2024-12-09 04:07:17.296675] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1740, cid 0, qid 0 00:17:35.405 [2024-12-09 04:07:17.296716] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.405 [2024-12-09 04:07:17.296723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.405 [2024-12-09 04:07:17.296727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.296731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1740) on tqpair=0xc8d750 00:17:35.405 [2024-12-09 04:07:17.296737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:35.405 [2024-12-09 04:07:17.296747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.296751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.296755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8d750) 00:17:35.405 [2024-12-09 04:07:17.296762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.405 [2024-12-09 04:07:17.296779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1740, cid 0, qid 0 00:17:35.405 [2024-12-09 04:07:17.296825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.405 [2024-12-09 04:07:17.296833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.405 [2024-12-09 04:07:17.296836] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.296840] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1740) on tqpair=0xc8d750 00:17:35.405 [2024-12-09 04:07:17.296846] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:35.405 [2024-12-09 04:07:17.296851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:35.405 [2024-12-09 04:07:17.296859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:35.405 [2024-12-09 04:07:17.296970] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:17:35.405 [2024-12-09 04:07:17.296976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:35.405 [2024-12-09 04:07:17.296985] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.296989] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.296993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8d750) 00:17:35.405 [2024-12-09 04:07:17.297001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.405 [2024-12-09 04:07:17.297021] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1740, cid 0, qid 0 00:17:35.405 [2024-12-09 04:07:17.297062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.405 [2024-12-09 04:07:17.297069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.405 [2024-12-09 04:07:17.297073] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.297077] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1740) on tqpair=0xc8d750 00:17:35.405 [2024-12-09 04:07:17.297082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:35.405 [2024-12-09 04:07:17.297092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.297097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.297101] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8d750) 00:17:35.405 [2024-12-09 04:07:17.297109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.405 [2024-12-09 04:07:17.297126] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1740, cid 0, qid 0 00:17:35.405 [2024-12-09 04:07:17.297167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.405 [2024-12-09 04:07:17.297173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.405 [2024-12-09 04:07:17.297177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.297181] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1740) on tqpair=0xc8d750 00:17:35.405 [2024-12-09 04:07:17.297186] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:35.405 [2024-12-09 04:07:17.297192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:35.405 [2024-12-09 04:07:17.297199] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:17:35.405 [2024-12-09 04:07:17.297210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:35.405 [2024-12-09 04:07:17.297221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.405 [2024-12-09 04:07:17.297225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8d750) 00:17:35.405 [2024-12-09 04:07:17.297248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.405 [2024-12-09 04:07:17.297271] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1740, cid 0, qid 0 00:17:35.405 [2024-12-09 04:07:17.297370] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:35.405 [2024-12-09 04:07:17.297377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:35.405 [2024-12-09 04:07:17.297381] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297385] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc8d750): datao=0, datal=4096, cccid=0 00:17:35.406 [2024-12-09 04:07:17.297390] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf1740) on tqpair(0xc8d750): expected_datao=0, payload_size=4096 00:17:35.406 [2024-12-09 04:07:17.297395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297403] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297408] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.406 [2024-12-09 04:07:17.297423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.406 [2024-12-09 04:07:17.297426] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297430] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1740) on tqpair=0xc8d750 00:17:35.406 [2024-12-09 04:07:17.297439] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:17:35.406 [2024-12-09 04:07:17.297445] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:17:35.406 [2024-12-09 04:07:17.297449] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:17:35.406 [2024-12-09 04:07:17.297454] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:17:35.406 [2024-12-09 04:07:17.297459] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:17:35.406 [2024-12-09 04:07:17.297465] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:17:35.406 [2024-12-09 04:07:17.297474] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:35.406 [2024-12-09 04:07:17.297482] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297490] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8d750) 00:17:35.406 [2024-12-09 04:07:17.297498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.406 [2024-12-09 04:07:17.297517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1740, cid 0, qid 0 00:17:35.406 [2024-12-09 04:07:17.297562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.406 [2024-12-09 04:07:17.297569] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.406 [2024-12-09 04:07:17.297573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1740) on tqpair=0xc8d750 00:17:35.406 [2024-12-09 04:07:17.297590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8d750) 00:17:35.406 [2024-12-09 04:07:17.297606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.406 [2024-12-09 04:07:17.297612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:17:35.406 [2024-12-09 04:07:17.297627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc8d750) 00:17:35.406 [2024-12-09 04:07:17.297654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.406 [2024-12-09 04:07:17.297668] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc8d750) 00:17:35.406 [2024-12-09 04:07:17.297682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.406 [2024-12-09 04:07:17.297688] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297696] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.406 [2024-12-09 04:07:17.297702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.406 [2024-12-09 04:07:17.297709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:35.406 [2024-12-09 04:07:17.297718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:35.406 [2024-12-09 04:07:17.297726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297730] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc8d750) 00:17:35.406 [2024-12-09 04:07:17.297737] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.406 [2024-12-09 04:07:17.297759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1740, cid 0, qid 0 00:17:35.406 [2024-12-09 04:07:17.297767] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf18c0, cid 1, qid 0 00:17:35.406 [2024-12-09 04:07:17.297772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1a40, cid 2, qid 0 00:17:35.406 [2024-12-09 04:07:17.297777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.406 [2024-12-09 04:07:17.297782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1d40, cid 4, qid 0 00:17:35.406 [2024-12-09 04:07:17.297873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.406 [2024-12-09 04:07:17.297880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.406 [2024-12-09 04:07:17.297884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1d40) on tqpair=0xc8d750 00:17:35.406 [2024-12-09 04:07:17.297893] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending 
keep alive every 5000000 us 00:17:35.406 [2024-12-09 04:07:17.297905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:35.406 [2024-12-09 04:07:17.297915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:17:35.406 [2024-12-09 04:07:17.297922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:35.406 [2024-12-09 04:07:17.297929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.297937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc8d750) 00:17:35.406 [2024-12-09 04:07:17.297945] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.406 [2024-12-09 04:07:17.297964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1d40, cid 4, qid 0 00:17:35.406 [2024-12-09 04:07:17.298029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.406 [2024-12-09 04:07:17.298036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.406 [2024-12-09 04:07:17.298040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.298044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1d40) on tqpair=0xc8d750 00:17:35.406 [2024-12-09 04:07:17.298109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:17:35.406 [2024-12-09 04:07:17.298121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:35.406 [2024-12-09 04:07:17.298131] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.298135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc8d750) 00:17:35.406 [2024-12-09 04:07:17.298143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.406 [2024-12-09 04:07:17.298162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1d40, cid 4, qid 0 00:17:35.406 [2024-12-09 04:07:17.298252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:35.406 [2024-12-09 04:07:17.298262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:35.406 [2024-12-09 04:07:17.298266] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.298270] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc8d750): datao=0, datal=4096, cccid=4 00:17:35.406 [2024-12-09 04:07:17.298275] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf1d40) on tqpair(0xc8d750): expected_datao=0, payload_size=4096 00:17:35.406 [2024-12-09 04:07:17.298279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.298287] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:35.406 [2024-12-09 
04:07:17.298291] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:35.406 [2024-12-09 04:07:17.298299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.406 [2024-12-09 04:07:17.298306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.407 [2024-12-09 04:07:17.298309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.298314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1d40) on tqpair=0xc8d750 00:17:35.407 [2024-12-09 04:07:17.298333] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:17:35.407 [2024-12-09 04:07:17.298348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:17:35.407 [2024-12-09 04:07:17.298361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:17:35.407 [2024-12-09 04:07:17.298370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.298374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc8d750) 00:17:35.407 [2024-12-09 04:07:17.298383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.407 [2024-12-09 04:07:17.298405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1d40, cid 4, qid 0 00:17:35.407 [2024-12-09 04:07:17.298480] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:35.407 [2024-12-09 04:07:17.298487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:35.407 [2024-12-09 04:07:17.298491] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.298495] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc8d750): datao=0, datal=4096, cccid=4 00:17:35.407 [2024-12-09 04:07:17.298500] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf1d40) on tqpair(0xc8d750): expected_datao=0, payload_size=4096 00:17:35.407 [2024-12-09 04:07:17.298505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.298512] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.298516] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.298524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.407 [2024-12-09 04:07:17.298530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.407 [2024-12-09 04:07:17.298534] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.298538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1d40) on tqpair=0xc8d750 00:17:35.407 [2024-12-09 04:07:17.298555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:35.407 [2024-12-09 04:07:17.298582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:35.407 [2024-12-09 04:07:17.298591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:17:35.407 [2024-12-09 04:07:17.298595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc8d750) 00:17:35.407 [2024-12-09 04:07:17.298602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.407 [2024-12-09 04:07:17.298623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1d40, cid 4, qid 0 00:17:35.407 [2024-12-09 04:07:17.298677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:35.407 [2024-12-09 04:07:17.298684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:35.407 [2024-12-09 04:07:17.298688] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.298692] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc8d750): datao=0, datal=4096, cccid=4 00:17:35.407 [2024-12-09 04:07:17.298697] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf1d40) on tqpair(0xc8d750): expected_datao=0, payload_size=4096 00:17:35.407 [2024-12-09 04:07:17.298701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.298708] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.298712] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.298720] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.407 [2024-12-09 04:07:17.298726] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.407 [2024-12-09 04:07:17.298730] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.298734] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1d40) on tqpair=0xc8d750 00:17:35.407 [2024-12-09 04:07:17.298743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:35.407 [2024-12-09 04:07:17.298752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:17:35.407 [2024-12-09 04:07:17.298764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:17:35.407 [2024-12-09 04:07:17.298771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:35.407 [2024-12-09 04:07:17.298777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:35.407 [2024-12-09 04:07:17.298782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:17:35.407 [2024-12-09 04:07:17.298788] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:17:35.407 [2024-12-09 04:07:17.298793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:17:35.407 [2024-12-09 04:07:17.298798] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:17:35.407 [2024-12-09 
04:07:17.298814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.298819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc8d750) 00:17:35.407 [2024-12-09 04:07:17.298826] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.407 [2024-12-09 04:07:17.298833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.298838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.298841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc8d750) 00:17:35.407 [2024-12-09 04:07:17.298848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.407 [2024-12-09 04:07:17.298874] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1d40, cid 4, qid 0 00:17:35.407 [2024-12-09 04:07:17.298882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1ec0, cid 5, qid 0 00:17:35.407 [2024-12-09 04:07:17.298943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.407 [2024-12-09 04:07:17.298950] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.407 [2024-12-09 04:07:17.298954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.298958] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1d40) on tqpair=0xc8d750 00:17:35.407 [2024-12-09 04:07:17.298965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.407 [2024-12-09 04:07:17.298971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.407 [2024-12-09 04:07:17.298975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.298979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1ec0) on tqpair=0xc8d750 00:17:35.407 [2024-12-09 04:07:17.298989] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.298993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc8d750) 00:17:35.407 [2024-12-09 04:07:17.299001] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.407 [2024-12-09 04:07:17.299018] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1ec0, cid 5, qid 0 00:17:35.407 [2024-12-09 04:07:17.299067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.407 [2024-12-09 04:07:17.299074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.407 [2024-12-09 04:07:17.299078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.299082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1ec0) on tqpair=0xc8d750 00:17:35.407 [2024-12-09 04:07:17.299092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.299097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc8d750) 00:17:35.407 [2024-12-09 04:07:17.299104] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:35.407 [2024-12-09 04:07:17.299120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1ec0, cid 5, qid 0 00:17:35.407 [2024-12-09 04:07:17.299211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.407 [2024-12-09 04:07:17.299223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.407 [2024-12-09 04:07:17.299227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.299231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1ec0) on tqpair=0xc8d750 00:17:35.407 [2024-12-09 04:07:17.299242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.407 [2024-12-09 04:07:17.299247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc8d750) 00:17:35.408 [2024-12-09 04:07:17.299255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.408 [2024-12-09 04:07:17.299277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1ec0, cid 5, qid 0 00:17:35.408 [2024-12-09 04:07:17.299322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.408 [2024-12-09 04:07:17.299329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.408 [2024-12-09 04:07:17.299332] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1ec0) on tqpair=0xc8d750 00:17:35.408 [2024-12-09 04:07:17.299357] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc8d750) 00:17:35.408 [2024-12-09 04:07:17.299386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.408 [2024-12-09 04:07:17.299394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc8d750) 00:17:35.408 [2024-12-09 04:07:17.299405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.408 [2024-12-09 04:07:17.299412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xc8d750) 00:17:35.408 [2024-12-09 04:07:17.299423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.408 [2024-12-09 04:07:17.299435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc8d750) 00:17:35.408 [2024-12-09 04:07:17.299447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.408 [2024-12-09 04:07:17.299467] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1ec0, cid 5, qid 0 00:17:35.408 [2024-12-09 04:07:17.299474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1d40, cid 4, qid 0 00:17:35.408 [2024-12-09 04:07:17.299479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf2040, cid 6, qid 0 00:17:35.408 [2024-12-09 04:07:17.299484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf21c0, cid 7, qid 0 00:17:35.408 [2024-12-09 04:07:17.299626] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:35.408 [2024-12-09 04:07:17.299633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:35.408 [2024-12-09 04:07:17.299637] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299641] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc8d750): datao=0, datal=8192, cccid=5 00:17:35.408 [2024-12-09 04:07:17.299646] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf1ec0) on tqpair(0xc8d750): expected_datao=0, payload_size=8192 00:17:35.408 [2024-12-09 04:07:17.299650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299667] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299672] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:35.408 [2024-12-09 04:07:17.299684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:35.408 [2024-12-09 04:07:17.299688] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299692] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc8d750): datao=0, datal=512, cccid=4 00:17:35.408 [2024-12-09 04:07:17.299697] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf1d40) on tqpair(0xc8d750): expected_datao=0, payload_size=512 00:17:35.408 [2024-12-09 04:07:17.299701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299708] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299712] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299718] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:35.408 [2024-12-09 04:07:17.299724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:35.408 [2024-12-09 04:07:17.299728] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299731] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc8d750): datao=0, datal=512, cccid=6 00:17:35.408 [2024-12-09 04:07:17.299736] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf2040) on tqpair(0xc8d750): expected_datao=0, payload_size=512 00:17:35.408 [2024-12-09 04:07:17.299740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299747] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299750] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:35.408 [2024-12-09 04:07:17.299777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =7 00:17:35.408 [2024-12-09 04:07:17.299781] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299785] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc8d750): datao=0, datal=4096, cccid=7 00:17:35.408 [2024-12-09 04:07:17.299789] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf21c0) on tqpair(0xc8d750): expected_datao=0, payload_size=4096 00:17:35.408 [2024-12-09 04:07:17.299793] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299800] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299804] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.408 [2024-12-09 04:07:17.299818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.408 [2024-12-09 04:07:17.299821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1ec0) on tqpair=0xc8d750 00:17:35.408 [2024-12-09 04:07:17.299841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.408 [2024-12-09 04:07:17.299848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.408 [2024-12-09 04:07:17.299852] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1d40) on tqpair=0xc8d750 00:17:35.408 [2024-12-09 04:07:17.299869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.408 [2024-12-09 04:07:17.299876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.408 [2024-12-09 04:07:17.299880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.408 [2024-12-09 04:07:17.299884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf2040) on tqpair=0xc8d750 00:17:35.408 [2024-12-09 04:07:17.299891] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.408 [2024-12-09 04:07:17.299897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.408 ===================================================== 00:17:35.408 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:35.408 ===================================================== 00:17:35.408 Controller Capabilities/Features 00:17:35.408 ================================ 00:17:35.408 Vendor ID: 8086 00:17:35.408 Subsystem Vendor ID: 8086 00:17:35.408 Serial Number: SPDK00000000000001 00:17:35.408 Model Number: SPDK bdev Controller 00:17:35.408 Firmware Version: 25.01 00:17:35.408 Recommended Arb Burst: 6 00:17:35.408 IEEE OUI Identifier: e4 d2 5c 00:17:35.408 Multi-path I/O 00:17:35.408 May have multiple subsystem ports: Yes 00:17:35.408 May have multiple controllers: Yes 00:17:35.408 Associated with SR-IOV VF: No 00:17:35.408 Max Data Transfer Size: 131072 00:17:35.408 Max Number of Namespaces: 32 00:17:35.408 Max Number of I/O Queues: 127 00:17:35.408 NVMe Specification Version (VS): 1.3 00:17:35.408 NVMe Specification Version (Identify): 1.3 00:17:35.408 Maximum Queue Entries: 128 00:17:35.408 Contiguous Queues Required: Yes 00:17:35.408 Arbitration Mechanisms Supported 00:17:35.408 Weighted Round Robin: Not Supported 
00:17:35.408 Vendor Specific: Not Supported 00:17:35.408 Reset Timeout: 15000 ms 00:17:35.408 Doorbell Stride: 4 bytes 00:17:35.408 NVM Subsystem Reset: Not Supported 00:17:35.408 Command Sets Supported 00:17:35.408 NVM Command Set: Supported 00:17:35.408 Boot Partition: Not Supported 00:17:35.408 Memory Page Size Minimum: 4096 bytes 00:17:35.408 Memory Page Size Maximum: 4096 bytes 00:17:35.408 Persistent Memory Region: Not Supported 00:17:35.408 Optional Asynchronous Events Supported 00:17:35.409 Namespace Attribute Notices: Supported 00:17:35.409 Firmware Activation Notices: Not Supported 00:17:35.409 ANA Change Notices: Not Supported 00:17:35.409 PLE Aggregate Log Change Notices: Not Supported 00:17:35.409 LBA Status Info Alert Notices: Not Supported 00:17:35.409 EGE Aggregate Log Change Notices: Not Supported 00:17:35.409 Normal NVM Subsystem Shutdown event: Not Supported 00:17:35.409 Zone Descriptor Change Notices: Not Supported 00:17:35.409 Discovery Log Change Notices: Not Supported 00:17:35.409 Controller Attributes 00:17:35.409 128-bit Host Identifier: Supported 00:17:35.409 Non-Operational Permissive Mode: Not Supported 00:17:35.409 NVM Sets: Not Supported 00:17:35.409 Read Recovery Levels: Not Supported 00:17:35.409 Endurance Groups: Not Supported 00:17:35.409 Predictable Latency Mode: Not Supported 00:17:35.409 Traffic Based Keep ALive: Not Supported 00:17:35.409 Namespace Granularity: Not Supported 00:17:35.409 SQ Associations: Not Supported 00:17:35.409 UUID List: Not Supported 00:17:35.409 Multi-Domain Subsystem: Not Supported 00:17:35.409 Fixed Capacity Management: Not Supported 00:17:35.409 Variable Capacity Management: Not Supported 00:17:35.409 Delete Endurance Group: Not Supported 00:17:35.409 Delete NVM Set: Not Supported 00:17:35.409 Extended LBA Formats Supported: Not Supported 00:17:35.409 Flexible Data Placement Supported: Not Supported 00:17:35.409 00:17:35.409 Controller Memory Buffer Support 00:17:35.409 ================================ 00:17:35.409 Supported: No 00:17:35.409 00:17:35.409 Persistent Memory Region Support 00:17:35.409 ================================ 00:17:35.409 Supported: No 00:17:35.409 00:17:35.409 Admin Command Set Attributes 00:17:35.409 ============================ 00:17:35.409 Security Send/Receive: Not Supported 00:17:35.409 Format NVM: Not Supported 00:17:35.409 Firmware Activate/Download: Not Supported 00:17:35.409 Namespace Management: Not Supported 00:17:35.409 Device Self-Test: Not Supported 00:17:35.409 Directives: Not Supported 00:17:35.409 NVMe-MI: Not Supported 00:17:35.409 Virtualization Management: Not Supported 00:17:35.409 Doorbell Buffer Config: Not Supported 00:17:35.409 Get LBA Status Capability: Not Supported 00:17:35.409 Command & Feature Lockdown Capability: Not Supported 00:17:35.409 Abort Command Limit: 4 00:17:35.409 Async Event Request Limit: 4 00:17:35.409 Number of Firmware Slots: N/A 00:17:35.409 Firmware Slot 1 Read-Only: N/A 00:17:35.409 Firmware Activation Without Reset: N/A 00:17:35.409 Multiple Update Detection Support: N/A 00:17:35.409 Firmware Update Granularity: No Information Provided 00:17:35.409 Per-Namespace SMART Log: No 00:17:35.409 Asymmetric Namespace Access Log Page: Not Supported 00:17:35.409 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:35.409 Command Effects Log Page: Supported 00:17:35.409 Get Log Page Extended Data: Supported 00:17:35.409 Telemetry Log Pages: Not Supported 00:17:35.409 Persistent Event Log Pages: Not Supported 00:17:35.409 Supported Log Pages Log Page: May Support 
00:17:35.409 Commands Supported & Effects Log Page: Not Supported 00:17:35.409 Feature Identifiers & Effects Log Page:May Support 00:17:35.409 NVMe-MI Commands & Effects Log Page: May Support 00:17:35.409 Data Area 4 for Telemetry Log: Not Supported 00:17:35.409 Error Log Page Entries Supported: 128 00:17:35.409 Keep Alive: Supported 00:17:35.409 Keep Alive Granularity: 10000 ms 00:17:35.409 00:17:35.409 NVM Command Set Attributes 00:17:35.409 ========================== 00:17:35.409 Submission Queue Entry Size 00:17:35.409 Max: 64 00:17:35.409 Min: 64 00:17:35.409 Completion Queue Entry Size 00:17:35.409 Max: 16 00:17:35.409 Min: 16 00:17:35.409 Number of Namespaces: 32 00:17:35.409 Compare Command: Supported 00:17:35.409 Write Uncorrectable Command: Not Supported 00:17:35.409 Dataset Management Command: Supported 00:17:35.409 Write Zeroes Command: Supported 00:17:35.409 Set Features Save Field: Not Supported 00:17:35.409 Reservations: Supported 00:17:35.409 Timestamp: Not Supported 00:17:35.409 Copy: Supported 00:17:35.409 Volatile Write Cache: Present 00:17:35.409 Atomic Write Unit (Normal): 1 00:17:35.409 Atomic Write Unit (PFail): 1 00:17:35.409 Atomic Compare & Write Unit: 1 00:17:35.409 Fused Compare & Write: Supported 00:17:35.409 Scatter-Gather List 00:17:35.409 SGL Command Set: Supported 00:17:35.409 SGL Keyed: Supported 00:17:35.409 SGL Bit Bucket Descriptor: Not Supported 00:17:35.409 SGL Metadata Pointer: Not Supported 00:17:35.409 Oversized SGL: Not Supported 00:17:35.409 SGL Metadata Address: Not Supported 00:17:35.409 SGL Offset: Supported 00:17:35.409 Transport SGL Data Block: Not Supported 00:17:35.409 Replay Protected Memory Block: Not Supported 00:17:35.409 00:17:35.409 Firmware Slot Information 00:17:35.409 ========================= 00:17:35.409 Active slot: 1 00:17:35.409 Slot 1 Firmware Revision: 25.01 00:17:35.409 00:17:35.409 00:17:35.409 Commands Supported and Effects 00:17:35.409 ============================== 00:17:35.409 Admin Commands 00:17:35.409 -------------- 00:17:35.409 Get Log Page (02h): Supported 00:17:35.409 Identify (06h): Supported 00:17:35.409 Abort (08h): Supported 00:17:35.409 Set Features (09h): Supported 00:17:35.409 Get Features (0Ah): Supported 00:17:35.409 Asynchronous Event Request (0Ch): Supported 00:17:35.409 Keep Alive (18h): Supported 00:17:35.409 I/O Commands 00:17:35.409 ------------ 00:17:35.409 Flush (00h): Supported LBA-Change 00:17:35.409 Write (01h): Supported LBA-Change 00:17:35.409 Read (02h): Supported 00:17:35.409 Compare (05h): Supported 00:17:35.409 Write Zeroes (08h): Supported LBA-Change 00:17:35.409 Dataset Management (09h): Supported LBA-Change 00:17:35.409 Copy (19h): Supported LBA-Change 00:17:35.409 00:17:35.409 Error Log 00:17:35.409 ========= 00:17:35.409 00:17:35.409 Arbitration 00:17:35.409 =========== 00:17:35.409 Arbitration Burst: 1 00:17:35.409 00:17:35.409 Power Management 00:17:35.409 ================ 00:17:35.409 Number of Power States: 1 00:17:35.409 Current Power State: Power State #0 00:17:35.409 Power State #0: 00:17:35.409 Max Power: 0.00 W 00:17:35.409 Non-Operational State: Operational 00:17:35.409 Entry Latency: Not Reported 00:17:35.409 Exit Latency: Not Reported 00:17:35.409 Relative Read Throughput: 0 00:17:35.409 Relative Read Latency: 0 00:17:35.409 Relative Write Throughput: 0 00:17:35.409 Relative Write Latency: 0 00:17:35.409 Idle Power: Not Reported 00:17:35.409 Active Power: Not Reported 00:17:35.409 Non-Operational Permissive Mode: Not Supported 00:17:35.409 00:17:35.409 Health 
Information 00:17:35.409 ================== 00:17:35.409 Critical Warnings: 00:17:35.409 Available Spare Space: OK 00:17:35.409 Temperature: OK 00:17:35.409 Device Reliability: OK 00:17:35.410 Read Only: No 00:17:35.410 Volatile Memory Backup: OK 00:17:35.410 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:35.410 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:35.410 Available Spare: 0% 00:17:35.410 Available Spare Threshold: 0% 00:17:35.410 Life Percentage Used:[2024-12-09 04:07:17.299901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.410 [2024-12-09 04:07:17.299905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf21c0) on tqpair=0xc8d750 00:17:35.410 [2024-12-09 04:07:17.300015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.410 [2024-12-09 04:07:17.300023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc8d750) 00:17:35.410 [2024-12-09 04:07:17.300031] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-12-09 04:07:17.300053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf21c0, cid 7, qid 0 00:17:35.410 [2024-12-09 04:07:17.300104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.410 [2024-12-09 04:07:17.300111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.410 [2024-12-09 04:07:17.300115] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.410 [2024-12-09 04:07:17.300119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf21c0) on tqpair=0xc8d750 00:17:35.410 [2024-12-09 04:07:17.300159] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:17:35.410 [2024-12-09 04:07:17.300171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1740) on tqpair=0xc8d750 00:17:35.410 [2024-12-09 04:07:17.300178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.410 [2024-12-09 04:07:17.300184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf18c0) on tqpair=0xc8d750 00:17:35.410 [2024-12-09 04:07:17.300189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.410 [2024-12-09 04:07:17.300194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1a40) on tqpair=0xc8d750 00:17:35.410 [2024-12-09 04:07:17.304268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.410 [2024-12-09 04:07:17.304278] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.410 [2024-12-09 04:07:17.304283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.410 [2024-12-09 04:07:17.304294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.410 [2024-12-09 04:07:17.304299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.410 [2024-12-09 04:07:17.304303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.410 [2024-12-09 04:07:17.304312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-12-09 04:07:17.304340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.410 [2024-12-09 04:07:17.304390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.410 [2024-12-09 04:07:17.304413] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.410 [2024-12-09 04:07:17.304417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.410 [2024-12-09 04:07:17.304422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.410 [2024-12-09 04:07:17.304430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.410 [2024-12-09 04:07:17.304435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.410 [2024-12-09 04:07:17.304439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.410 [2024-12-09 04:07:17.304446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-12-09 04:07:17.304468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.410 [2024-12-09 04:07:17.304535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.410 [2024-12-09 04:07:17.304542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.410 [2024-12-09 04:07:17.304546] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.410 [2024-12-09 04:07:17.304550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.410 [2024-12-09 04:07:17.304556] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:17:35.410 [2024-12-09 04:07:17.304562] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:17:35.410 [2024-12-09 04:07:17.304587] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.410 [2024-12-09 04:07:17.304592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.410 [2024-12-09 04:07:17.304596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.410 [2024-12-09 04:07:17.304603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-12-09 04:07:17.304620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.410 [2024-12-09 04:07:17.304666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.410 [2024-12-09 04:07:17.304673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.410 [2024-12-09 04:07:17.304677] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.410 [2024-12-09 04:07:17.304681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.410 [2024-12-09 04:07:17.304692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.410 [2024-12-09 04:07:17.304697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.410 [2024-12-09 04:07:17.304701] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 
00:17:35.410 [2024-12-09 04:07:17.304708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-12-09 04:07:17.304724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.410 [2024-12-09 04:07:17.304771] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.410 [2024-12-09 04:07:17.304778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.410 [2024-12-09 04:07:17.304781] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.410 [2024-12-09 04:07:17.304785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.410 [2024-12-09 04:07:17.304796] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.410 [2024-12-09 04:07:17.304800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.410 [2024-12-09 04:07:17.304804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.410 [2024-12-09 04:07:17.304811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-12-09 04:07:17.304828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.411 [2024-12-09 04:07:17.304871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.411 [2024-12-09 04:07:17.304878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.411 [2024-12-09 04:07:17.304882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.304886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.411 [2024-12-09 04:07:17.304896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.304901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.304904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.411 [2024-12-09 04:07:17.304911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-12-09 04:07:17.304928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.411 [2024-12-09 04:07:17.304972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.411 [2024-12-09 04:07:17.304978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.411 [2024-12-09 04:07:17.304982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.304986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.411 [2024-12-09 04:07:17.304996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305005] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.411 [2024-12-09 04:07:17.305012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-12-09 04:07:17.305028] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.411 [2024-12-09 04:07:17.305074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.411 [2024-12-09 04:07:17.305081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.411 [2024-12-09 04:07:17.305084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.411 [2024-12-09 04:07:17.305099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305107] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.411 [2024-12-09 04:07:17.305114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-12-09 04:07:17.305131] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.411 [2024-12-09 04:07:17.305171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.411 [2024-12-09 04:07:17.305178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.411 [2024-12-09 04:07:17.305181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.411 [2024-12-09 04:07:17.305196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.411 [2024-12-09 04:07:17.305229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-12-09 04:07:17.305249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.411 [2024-12-09 04:07:17.305296] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.411 [2024-12-09 04:07:17.305303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.411 [2024-12-09 04:07:17.305307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.411 [2024-12-09 04:07:17.305322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305330] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.411 [2024-12-09 04:07:17.305338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-12-09 04:07:17.305355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.411 [2024-12-09 04:07:17.305399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.411 [2024-12-09 
04:07:17.305405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.411 [2024-12-09 04:07:17.305409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.411 [2024-12-09 04:07:17.305423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.411 [2024-12-09 04:07:17.305439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-12-09 04:07:17.305456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.411 [2024-12-09 04:07:17.305497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.411 [2024-12-09 04:07:17.305504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.411 [2024-12-09 04:07:17.305508] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.411 [2024-12-09 04:07:17.305522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.411 [2024-12-09 04:07:17.305538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-12-09 04:07:17.305554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.411 [2024-12-09 04:07:17.305595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.411 [2024-12-09 04:07:17.305602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.411 [2024-12-09 04:07:17.305605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.411 [2024-12-09 04:07:17.305646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305652] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.411 [2024-12-09 04:07:17.305663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-12-09 04:07:17.305682] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.411 [2024-12-09 04:07:17.305733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.411 [2024-12-09 04:07:17.305740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.411 [2024-12-09 04:07:17.305744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.411 [2024-12-09 
04:07:17.305748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.411 [2024-12-09 04:07:17.305759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305763] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305767] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.411 [2024-12-09 04:07:17.305775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-12-09 04:07:17.305791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.411 [2024-12-09 04:07:17.305836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.411 [2024-12-09 04:07:17.305844] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.411 [2024-12-09 04:07:17.305848] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305852] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.411 [2024-12-09 04:07:17.305862] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305867] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.411 [2024-12-09 04:07:17.305871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.411 [2024-12-09 04:07:17.305879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-12-09 04:07:17.305896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.411 [2024-12-09 04:07:17.305944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.411 [2024-12-09 04:07:17.305951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.412 [2024-12-09 04:07:17.305954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.305959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.412 [2024-12-09 04:07:17.305969] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.305989] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.305993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.412 [2024-12-09 04:07:17.306000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-12-09 04:07:17.306016] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.412 [2024-12-09 04:07:17.306057] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.412 [2024-12-09 04:07:17.306063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.412 [2024-12-09 04:07:17.306067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.412 [2024-12-09 04:07:17.306081] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:17:35.412 [2024-12-09 04:07:17.306086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.412 [2024-12-09 04:07:17.306097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-12-09 04:07:17.306114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.412 [2024-12-09 04:07:17.306154] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.412 [2024-12-09 04:07:17.306160] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.412 [2024-12-09 04:07:17.306164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.412 [2024-12-09 04:07:17.306178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306187] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.412 [2024-12-09 04:07:17.306194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-12-09 04:07:17.306225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.412 [2024-12-09 04:07:17.306271] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.412 [2024-12-09 04:07:17.306278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.412 [2024-12-09 04:07:17.306282] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306286] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.412 [2024-12-09 04:07:17.306296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306301] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306304] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.412 [2024-12-09 04:07:17.306312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-12-09 04:07:17.306329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.412 [2024-12-09 04:07:17.306372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.412 [2024-12-09 04:07:17.306379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.412 [2024-12-09 04:07:17.306383] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.412 [2024-12-09 04:07:17.306398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0xc8d750) 00:17:35.412 [2024-12-09 04:07:17.306414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-12-09 04:07:17.306430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.412 [2024-12-09 04:07:17.306471] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.412 [2024-12-09 04:07:17.306477] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.412 [2024-12-09 04:07:17.306481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.412 [2024-12-09 04:07:17.306495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.412 [2024-12-09 04:07:17.306511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-12-09 04:07:17.306527] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.412 [2024-12-09 04:07:17.306573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.412 [2024-12-09 04:07:17.306580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.412 [2024-12-09 04:07:17.306584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.412 [2024-12-09 04:07:17.306598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306603] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306606] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.412 [2024-12-09 04:07:17.306613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-12-09 04:07:17.306630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.412 [2024-12-09 04:07:17.306671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.412 [2024-12-09 04:07:17.306677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.412 [2024-12-09 04:07:17.306681] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.412 [2024-12-09 04:07:17.306695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.412 [2024-12-09 04:07:17.306711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 
[2024-12-09 04:07:17.306727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.412 [2024-12-09 04:07:17.306771] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.412 [2024-12-09 04:07:17.306777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.412 [2024-12-09 04:07:17.306781] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.412 [2024-12-09 04:07:17.306795] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.412 [2024-12-09 04:07:17.306811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-12-09 04:07:17.306827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.412 [2024-12-09 04:07:17.306868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.412 [2024-12-09 04:07:17.306874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.412 [2024-12-09 04:07:17.306878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.412 [2024-12-09 04:07:17.306892] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.412 [2024-12-09 04:07:17.306901] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.412 [2024-12-09 04:07:17.306908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-12-09 04:07:17.306924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.413 [2024-12-09 04:07:17.306968] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.413 [2024-12-09 04:07:17.306974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.413 [2024-12-09 04:07:17.306978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.306982] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.413 [2024-12-09 04:07:17.306992] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.306997] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307001] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.413 [2024-12-09 04:07:17.307008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-12-09 04:07:17.307025] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.413 [2024-12-09 04:07:17.307066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:17:35.413 [2024-12-09 04:07:17.307072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.413 [2024-12-09 04:07:17.307076] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.413 [2024-12-09 04:07:17.307090] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307095] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307098] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.413 [2024-12-09 04:07:17.307106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-12-09 04:07:17.307122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.413 [2024-12-09 04:07:17.307165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.413 [2024-12-09 04:07:17.307184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.413 [2024-12-09 04:07:17.307188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.413 [2024-12-09 04:07:17.307202] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307211] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.413 [2024-12-09 04:07:17.307218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-12-09 04:07:17.307237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.413 [2024-12-09 04:07:17.307283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.413 [2024-12-09 04:07:17.307290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.413 [2024-12-09 04:07:17.307294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.413 [2024-12-09 04:07:17.307308] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307313] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.413 [2024-12-09 04:07:17.307325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-12-09 04:07:17.307341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.413 [2024-12-09 04:07:17.307398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.413 [2024-12-09 04:07:17.307405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.413 [2024-12-09 04:07:17.307409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:17:35.413 [2024-12-09 04:07:17.307413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.413 [2024-12-09 04:07:17.307424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.413 [2024-12-09 04:07:17.307440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-12-09 04:07:17.307456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.413 [2024-12-09 04:07:17.307498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.413 [2024-12-09 04:07:17.307505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.413 [2024-12-09 04:07:17.307509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.413 [2024-12-09 04:07:17.307524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.413 [2024-12-09 04:07:17.307540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-12-09 04:07:17.307557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.413 [2024-12-09 04:07:17.307599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.413 [2024-12-09 04:07:17.307606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.413 [2024-12-09 04:07:17.307609] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.413 [2024-12-09 04:07:17.307624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.413 [2024-12-09 04:07:17.307640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-12-09 04:07:17.307668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.413 [2024-12-09 04:07:17.307712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.413 [2024-12-09 04:07:17.307719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.413 [2024-12-09 04:07:17.307723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.413 [2024-12-09 04:07:17.307738] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.413 [2024-12-09 04:07:17.307753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-12-09 04:07:17.307785] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.413 [2024-12-09 04:07:17.307832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.413 [2024-12-09 04:07:17.307838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.413 [2024-12-09 04:07:17.307842] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.413 [2024-12-09 04:07:17.307856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.413 [2024-12-09 04:07:17.307871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-12-09 04:07:17.307888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.413 [2024-12-09 04:07:17.307934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.413 [2024-12-09 04:07:17.307941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.413 [2024-12-09 04:07:17.307945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307949] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.413 [2024-12-09 04:07:17.307959] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.413 [2024-12-09 04:07:17.307967] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.413 [2024-12-09 04:07:17.307974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-12-09 04:07:17.307990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.414 [2024-12-09 04:07:17.308034] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.414 [2024-12-09 04:07:17.308040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.414 [2024-12-09 04:07:17.308044] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.414 [2024-12-09 04:07:17.308048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.414 [2024-12-09 04:07:17.308058] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.414 [2024-12-09 04:07:17.308063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.414 [2024-12-09 04:07:17.308067] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.414 [2024-12-09 04:07:17.308074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-12-09 04:07:17.308090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.414 [2024-12-09 04:07:17.308133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.414 [2024-12-09 04:07:17.308140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.414 [2024-12-09 04:07:17.308144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.414 [2024-12-09 04:07:17.308148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.414 [2024-12-09 04:07:17.308158] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.414 [2024-12-09 04:07:17.308163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.414 [2024-12-09 04:07:17.308167] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.414 [2024-12-09 04:07:17.308174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-12-09 04:07:17.308206] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.414 [2024-12-09 04:07:17.312240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.414 [2024-12-09 04:07:17.312253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.414 [2024-12-09 04:07:17.312258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.414 [2024-12-09 04:07:17.312262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.414 [2024-12-09 04:07:17.312277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:35.414 [2024-12-09 04:07:17.312282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:35.414 [2024-12-09 04:07:17.312286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8d750) 00:17:35.414 [2024-12-09 04:07:17.312295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-12-09 04:07:17.312320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf1bc0, cid 3, qid 0 00:17:35.414 [2024-12-09 04:07:17.312366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:35.414 [2024-12-09 04:07:17.312373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:35.414 [2024-12-09 04:07:17.312377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:35.414 [2024-12-09 04:07:17.312381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf1bc0) on tqpair=0xc8d750 00:17:35.414 [2024-12-09 04:07:17.312389] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:17:35.414 0% 00:17:35.414 Data Units Read: 0 00:17:35.414 Data Units Written: 0 00:17:35.414 Host Read Commands: 0 00:17:35.414 Host Write Commands: 0 00:17:35.414 Controller Busy Time: 0 minutes 00:17:35.414 Power Cycles: 0 00:17:35.414 Power On Hours: 0 hours 00:17:35.414 Unsafe Shutdowns: 0 00:17:35.414 
Unrecoverable Media Errors: 0 00:17:35.414 Lifetime Error Log Entries: 0 00:17:35.414 Warning Temperature Time: 0 minutes 00:17:35.414 Critical Temperature Time: 0 minutes 00:17:35.414 00:17:35.414 Number of Queues 00:17:35.414 ================ 00:17:35.414 Number of I/O Submission Queues: 127 00:17:35.414 Number of I/O Completion Queues: 127 00:17:35.414 00:17:35.414 Active Namespaces 00:17:35.414 ================= 00:17:35.414 Namespace ID:1 00:17:35.414 Error Recovery Timeout: Unlimited 00:17:35.414 Command Set Identifier: NVM (00h) 00:17:35.414 Deallocate: Supported 00:17:35.414 Deallocated/Unwritten Error: Not Supported 00:17:35.414 Deallocated Read Value: Unknown 00:17:35.414 Deallocate in Write Zeroes: Not Supported 00:17:35.414 Deallocated Guard Field: 0xFFFF 00:17:35.414 Flush: Supported 00:17:35.414 Reservation: Supported 00:17:35.414 Namespace Sharing Capabilities: Multiple Controllers 00:17:35.414 Size (in LBAs): 131072 (0GiB) 00:17:35.414 Capacity (in LBAs): 131072 (0GiB) 00:17:35.414 Utilization (in LBAs): 131072 (0GiB) 00:17:35.414 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:35.414 EUI64: ABCDEF0123456789 00:17:35.414 UUID: 3bc98fb1-3b30-45f1-a467-918e919f5035 00:17:35.414 Thin Provisioning: Not Supported 00:17:35.414 Per-NS Atomic Units: Yes 00:17:35.414 Atomic Boundary Size (Normal): 0 00:17:35.414 Atomic Boundary Size (PFail): 0 00:17:35.414 Atomic Boundary Offset: 0 00:17:35.414 Maximum Single Source Range Length: 65535 00:17:35.414 Maximum Copy Length: 65535 00:17:35.414 Maximum Source Range Count: 1 00:17:35.414 NGUID/EUI64 Never Reused: No 00:17:35.414 Namespace Write Protected: No 00:17:35.414 Number of LBA Formats: 1 00:17:35.414 Current LBA Format: LBA Format #00 00:17:35.414 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:35.414 00:17:35.414 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:35.673 rmmod nvme_tcp 00:17:35.673 rmmod nvme_fabrics 00:17:35.673 rmmod nvme_keyring 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@517 -- # '[' -n 74652 ']' 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74652 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74652 ']' 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74652 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74652 00:17:35.673 killing process with pid 74652 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74652' 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74652 00:17:35.673 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74652 00:17:35.932 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:35.932 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:35.932 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:35.932 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:17:35.932 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:17:35.932 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:17:35.932 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:35.932 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:35.932 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:35.932 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:35.932 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:35.932 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:35.932 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:35.933 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:35.933 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:35.933 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:35.933 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:36.191 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:36.192 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:36.192 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:36.192 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:36.192 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:36.192 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:36.192 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.192 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.192 04:07:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.192 04:07:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:17:36.192 00:17:36.192 real 0m2.434s 00:17:36.192 user 0m5.265s 00:17:36.192 sys 0m0.788s 00:17:36.192 04:07:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.192 04:07:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.192 ************************************ 00:17:36.192 END TEST nvmf_identify 00:17:36.192 ************************************ 00:17:36.192 04:07:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:36.192 04:07:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:36.192 04:07:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.192 04:07:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.192 ************************************ 00:17:36.192 START TEST nvmf_perf 00:17:36.192 ************************************ 00:17:36.192 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:36.192 * Looking for test storage... 
00:17:36.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:36.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.452 --rc genhtml_branch_coverage=1 00:17:36.452 --rc genhtml_function_coverage=1 00:17:36.452 --rc genhtml_legend=1 00:17:36.452 --rc geninfo_all_blocks=1 00:17:36.452 --rc geninfo_unexecuted_blocks=1 00:17:36.452 00:17:36.452 ' 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:36.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.452 --rc genhtml_branch_coverage=1 00:17:36.452 --rc genhtml_function_coverage=1 00:17:36.452 --rc genhtml_legend=1 00:17:36.452 --rc geninfo_all_blocks=1 00:17:36.452 --rc geninfo_unexecuted_blocks=1 00:17:36.452 00:17:36.452 ' 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:36.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.452 --rc genhtml_branch_coverage=1 00:17:36.452 --rc genhtml_function_coverage=1 00:17:36.452 --rc genhtml_legend=1 00:17:36.452 --rc geninfo_all_blocks=1 00:17:36.452 --rc geninfo_unexecuted_blocks=1 00:17:36.452 00:17:36.452 ' 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:36.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.452 --rc genhtml_branch_coverage=1 00:17:36.452 --rc genhtml_function_coverage=1 00:17:36.452 --rc genhtml_legend=1 00:17:36.452 --rc geninfo_all_blocks=1 00:17:36.452 --rc geninfo_unexecuted_blocks=1 00:17:36.452 00:17:36.452 ' 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:36.452 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:36.453 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:36.453 Cannot find device "nvmf_init_br" 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:36.453 Cannot find device "nvmf_init_br2" 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:36.453 Cannot find device "nvmf_tgt_br" 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:36.453 Cannot find device "nvmf_tgt_br2" 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:36.453 Cannot find device "nvmf_init_br" 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:36.453 Cannot find device "nvmf_init_br2" 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:36.453 Cannot find device "nvmf_tgt_br" 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:36.453 Cannot find device "nvmf_tgt_br2" 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:36.453 Cannot find device "nvmf_br" 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:36.453 Cannot find device "nvmf_init_if" 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:36.453 Cannot find device "nvmf_init_if2" 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:36.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:36.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:36.453 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:36.712 04:07:18 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:36.712 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:36.713 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:36.713 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:36.713 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:36.713 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:36.713 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:36.713 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:36.713 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:36.713 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:36.713 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:36.713 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:17:36.713 00:17:36.713 --- 10.0.0.3 ping statistics --- 00:17:36.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.713 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:36.713 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:36.713 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:36.713 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:17:36.713 00:17:36.713 --- 10.0.0.4 ping statistics --- 00:17:36.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.713 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:36.713 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:36.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:36.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:36.713 00:17:36.713 --- 10.0.0.1 ping statistics --- 00:17:36.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.713 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:36.713 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:36.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:17:36.713 00:17:36.713 --- 10.0.0.2 ping statistics --- 00:17:36.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.713 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:17:36.713 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.713 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:17:36.713 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:36.713 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.713 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:36.713 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:36.713 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.971 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:36.971 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:36.971 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:36.971 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:36.971 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:36.971 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:36.971 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74904 00:17:36.971 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:36.971 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74904 00:17:36.971 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74904 ']' 00:17:36.971 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.971 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.971 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:36.971 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.971 04:07:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:36.971 [2024-12-09 04:07:18.742255] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:17:36.971 [2024-12-09 04:07:18.742854] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.971 [2024-12-09 04:07:18.883307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:37.230 [2024-12-09 04:07:18.949779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.230 [2024-12-09 04:07:18.950149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.230 [2024-12-09 04:07:18.950339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.230 [2024-12-09 04:07:18.950436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.231 [2024-12-09 04:07:18.950509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.231 [2024-12-09 04:07:18.952152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.231 [2024-12-09 04:07:18.952315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.231 [2024-12-09 04:07:18.952938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:37.231 [2024-12-09 04:07:18.952985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.231 [2024-12-09 04:07:19.032472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:37.231 04:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.231 04:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:17:37.231 04:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:37.231 04:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:37.231 04:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:37.231 04:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.231 04:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:37.231 04:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:37.795 04:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:37.795 04:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:38.052 04:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:38.052 04:07:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:38.618 04:07:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:38.618 04:07:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:17:38.618 04:07:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:38.618 04:07:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:38.618 04:07:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:38.876 [2024-12-09 04:07:20.575880] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.876 04:07:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:39.195 04:07:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:39.195 04:07:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:39.195 04:07:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:39.195 04:07:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:39.453 04:07:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:39.710 [2024-12-09 04:07:21.557732] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:39.710 04:07:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:39.967 04:07:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:39.967 04:07:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:39.967 04:07:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:39.967 04:07:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:41.341 Initializing NVMe Controllers 00:17:41.341 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:41.341 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:41.341 Initialization complete. Launching workers. 00:17:41.341 ======================================================== 00:17:41.341 Latency(us) 00:17:41.341 Device Information : IOPS MiB/s Average min max 00:17:41.341 PCIE (0000:00:10.0) NSID 1 from core 0: 20220.46 78.99 1582.48 404.07 8337.39 00:17:41.341 ======================================================== 00:17:41.341 Total : 20220.46 78.99 1582.48 404.07 8337.39 00:17:41.341 00:17:41.341 04:07:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:42.800 Initializing NVMe Controllers 00:17:42.800 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:42.800 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:42.800 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:42.800 Initialization complete. Launching workers. 
00:17:42.800 ======================================================== 00:17:42.800 Latency(us) 00:17:42.800 Device Information : IOPS MiB/s Average min max 00:17:42.800 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3804.98 14.86 260.36 97.31 7095.51 00:17:42.800 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8112.48 6066.60 12070.99 00:17:42.800 ======================================================== 00:17:42.800 Total : 3928.98 15.35 508.18 97.31 12070.99 00:17:42.800 00:17:42.800 04:07:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:43.730 Initializing NVMe Controllers 00:17:43.730 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:43.730 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:43.730 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:43.730 Initialization complete. Launching workers. 00:17:43.730 ======================================================== 00:17:43.730 Latency(us) 00:17:43.730 Device Information : IOPS MiB/s Average min max 00:17:43.730 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9058.06 35.38 3533.13 507.57 10544.82 00:17:43.730 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3973.92 15.52 8096.05 5400.72 12626.33 00:17:43.730 ======================================================== 00:17:43.730 Total : 13031.98 50.91 4924.53 507.57 12626.33 00:17:43.730 00:17:43.987 04:07:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:43.987 04:07:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:46.515 Initializing NVMe Controllers 00:17:46.515 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:46.515 Controller IO queue size 128, less than required. 00:17:46.515 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:46.515 Controller IO queue size 128, less than required. 00:17:46.515 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:46.515 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:46.515 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:46.515 Initialization complete. Launching workers. 
00:17:46.515 ======================================================== 00:17:46.515 Latency(us) 00:17:46.515 Device Information : IOPS MiB/s Average min max 00:17:46.515 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1546.21 386.55 83697.95 49219.97 126758.49 00:17:46.515 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 653.74 163.44 201453.61 45202.35 321037.24 00:17:46.515 ======================================================== 00:17:46.515 Total : 2199.96 549.99 118690.49 45202.35 321037.24 00:17:46.515 00:17:46.515 04:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:17:46.773 Initializing NVMe Controllers 00:17:46.773 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:46.773 Controller IO queue size 128, less than required. 00:17:46.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:46.773 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:46.773 Controller IO queue size 128, less than required. 00:17:46.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:46.773 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:17:46.773 WARNING: Some requested NVMe devices were skipped 00:17:46.773 No valid NVMe controllers or AIO or URING devices found 00:17:46.773 04:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:17:49.307 Initializing NVMe Controllers 00:17:49.307 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:49.307 Controller IO queue size 128, less than required. 00:17:49.307 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:49.307 Controller IO queue size 128, less than required. 00:17:49.307 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:49.307 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:49.307 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:49.307 Initialization complete. Launching workers. 
00:17:49.307 00:17:49.307 ==================== 00:17:49.307 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:49.307 TCP transport: 00:17:49.307 polls: 8066 00:17:49.307 idle_polls: 5137 00:17:49.307 sock_completions: 2929 00:17:49.307 nvme_completions: 5529 00:17:49.307 submitted_requests: 8360 00:17:49.307 queued_requests: 1 00:17:49.307 00:17:49.307 ==================== 00:17:49.307 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:49.307 TCP transport: 00:17:49.307 polls: 8175 00:17:49.307 idle_polls: 4891 00:17:49.307 sock_completions: 3284 00:17:49.307 nvme_completions: 6013 00:17:49.307 submitted_requests: 9090 00:17:49.307 queued_requests: 1 00:17:49.307 ======================================================== 00:17:49.307 Latency(us) 00:17:49.307 Device Information : IOPS MiB/s Average min max 00:17:49.307 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1381.90 345.48 94793.25 49582.55 150246.90 00:17:49.307 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1502.90 375.72 85332.09 45023.76 146164.72 00:17:49.307 ======================================================== 00:17:49.307 Total : 2884.80 721.20 89864.26 45023.76 150246.90 00:17:49.307 00:17:49.307 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:49.307 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:49.873 rmmod nvme_tcp 00:17:49.873 rmmod nvme_fabrics 00:17:49.873 rmmod nvme_keyring 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74904 ']' 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74904 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74904 ']' 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74904 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74904 00:17:49.873 killing process with pid 74904 00:17:49.873 04:07:31 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74904' 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74904 00:17:49.873 04:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74904 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:17:50.806 00:17:50.806 real 0m14.643s 00:17:50.806 user 0m52.884s 00:17:50.806 sys 0m4.276s 00:17:50.806 04:07:32 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:50.806 ************************************ 00:17:50.806 END TEST nvmf_perf 00:17:50.806 ************************************ 00:17:50.806 04:07:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:51.064 04:07:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:51.064 04:07:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:51.064 04:07:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:51.064 04:07:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.064 ************************************ 00:17:51.065 START TEST nvmf_fio_host 00:17:51.065 ************************************ 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:51.065 * Looking for test storage... 00:17:51.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:51.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.065 --rc genhtml_branch_coverage=1 00:17:51.065 --rc genhtml_function_coverage=1 00:17:51.065 --rc genhtml_legend=1 00:17:51.065 --rc geninfo_all_blocks=1 00:17:51.065 --rc geninfo_unexecuted_blocks=1 00:17:51.065 00:17:51.065 ' 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:51.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.065 --rc genhtml_branch_coverage=1 00:17:51.065 --rc genhtml_function_coverage=1 00:17:51.065 --rc genhtml_legend=1 00:17:51.065 --rc geninfo_all_blocks=1 00:17:51.065 --rc geninfo_unexecuted_blocks=1 00:17:51.065 00:17:51.065 ' 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:51.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.065 --rc genhtml_branch_coverage=1 00:17:51.065 --rc genhtml_function_coverage=1 00:17:51.065 --rc genhtml_legend=1 00:17:51.065 --rc geninfo_all_blocks=1 00:17:51.065 --rc geninfo_unexecuted_blocks=1 00:17:51.065 00:17:51.065 ' 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:51.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.065 --rc genhtml_branch_coverage=1 00:17:51.065 --rc genhtml_function_coverage=1 00:17:51.065 --rc genhtml_legend=1 00:17:51.065 --rc geninfo_all_blocks=1 00:17:51.065 --rc geninfo_unexecuted_blocks=1 00:17:51.065 00:17:51.065 ' 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.065 04:07:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.065 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.066 04:07:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:51.066 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
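For reference, the nvmf_veth_init records that follow build the virtual test network used by the whole host test suite: a target network namespace, veth pairs whose bridge-side ends are enslaved to nvmf_br, 10.0.0.0/24 addressing, an iptables ACCEPT rule for the NVMe/TCP port, and a ping check of every address. Condensed into a standalone sketch with the same names and addresses as the trace (an illustration only, not the suite's own script, and showing just one initiator/target pair of the two that the trace creates):

  ip netns add nvmf_tgt_ns_spdk                               # target side lives in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge joins the bridge-side veth ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                          # initiator -> target reachability check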
00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:51.066 04:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:51.066 Cannot find device "nvmf_init_br" 00:17:51.324 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:17:51.324 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:51.324 Cannot find device "nvmf_init_br2" 00:17:51.324 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:17:51.324 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:51.324 Cannot find device "nvmf_tgt_br" 00:17:51.324 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:17:51.324 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:17:51.324 Cannot find device "nvmf_tgt_br2" 00:17:51.324 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:17:51.324 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:51.324 Cannot find device "nvmf_init_br" 00:17:51.324 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:17:51.324 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:51.324 Cannot find device "nvmf_init_br2" 00:17:51.324 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:17:51.324 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:51.324 Cannot find device "nvmf_tgt_br" 00:17:51.324 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:51.325 Cannot find device "nvmf_tgt_br2" 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:51.325 Cannot find device "nvmf_br" 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:51.325 Cannot find device "nvmf_init_if" 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:51.325 Cannot find device "nvmf_init_if2" 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:51.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:51.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:51.325 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:51.597 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:51.597 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:51.597 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:51.597 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:51.597 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:51.597 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:51.597 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:51.597 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:51.597 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:51.597 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:51.597 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:51.598 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:51.598 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:17:51.598 00:17:51.598 --- 10.0.0.3 ping statistics --- 00:17:51.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.598 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:51.598 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:51.598 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:17:51.598 00:17:51.598 --- 10.0.0.4 ping statistics --- 00:17:51.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.598 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:51.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:51.598 00:17:51.598 --- 10.0.0.1 ping statistics --- 00:17:51.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.598 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:51.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:17:51.598 00:17:51.598 --- 10.0.0.2 ping statistics --- 00:17:51.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.598 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75363 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75363 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 75363 ']' 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.598 04:07:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.598 [2024-12-09 04:07:33.476282] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:17:51.598 [2024-12-09 04:07:33.476658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.888 [2024-12-09 04:07:33.633347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:51.888 [2024-12-09 04:07:33.714390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.888 [2024-12-09 04:07:33.714640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.888 [2024-12-09 04:07:33.714794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.888 [2024-12-09 04:07:33.714925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.888 [2024-12-09 04:07:33.714962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
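Once the target application is up, host/fio.sh provisions it over the RPC socket and then drives I/O through the SPDK fio plugin; the records below trace exactly that sequence. A condensed sketch of it (paths and arguments copied from the trace, shortened with a hypothetical $SPDK variable; the polling loop is a simplified stand-in for waitforlisten, not the helper itself):

  SPDK=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done        # wait for the target RPC socket
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 # TCP transport, options as traced
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1    # 64 MiB malloc bdev, 512-byte blocks
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # initiator side: fio addresses the remote namespace directly through the SPDK plugin
  LD_PRELOAD=$SPDK/build/fio/spdk_nvme /usr/src/fio/fio $SPDK/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096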
00:17:51.888 [2024-12-09 04:07:33.718049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.888 [2024-12-09 04:07:33.718252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.888 [2024-12-09 04:07:33.718986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.888 [2024-12-09 04:07:33.719000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.888 [2024-12-09 04:07:33.797737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:52.824 04:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.824 04:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:17:52.824 04:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:53.082 [2024-12-09 04:07:34.785115] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.082 04:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:53.082 04:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:53.082 04:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.082 04:07:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:53.340 Malloc1 00:17:53.340 04:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:53.598 04:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:54.165 04:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:54.165 [2024-12-09 04:07:36.079981] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:54.165 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:54.733 04:07:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:54.733 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:54.733 fio-3.35 00:17:54.733 Starting 1 thread 00:17:57.265 00:17:57.265 test: (groupid=0, jobs=1): err= 0: pid=75446: Mon Dec 9 04:07:38 2024 00:17:57.265 read: IOPS=8599, BW=33.6MiB/s (35.2MB/s)(67.4MiB/2007msec) 00:17:57.265 slat (nsec): min=1811, max=335709, avg=2508.33, stdev=3625.65 00:17:57.265 clat (usec): min=2621, max=14063, avg=7767.74, stdev=693.04 00:17:57.265 lat (usec): min=2666, max=14065, avg=7770.24, stdev=692.81 00:17:57.265 clat percentiles (usec): 00:17:57.265 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7242], 00:17:57.265 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:17:57.265 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8586], 95.00th=[ 8848], 00:17:57.265 | 99.00th=[ 9896], 99.50th=[10421], 99.90th=[12387], 99.95th=[13304], 00:17:57.265 | 99.99th=[14091] 00:17:57.265 bw ( KiB/s): min=33720, max=35840, per=99.97%, avg=34388.00, stdev=999.51, samples=4 00:17:57.265 iops : min= 8430, max= 8960, avg=8597.00, stdev=249.88, samples=4 00:17:57.265 write: IOPS=8596, BW=33.6MiB/s (35.2MB/s)(67.4MiB/2007msec); 0 zone resets 00:17:57.265 slat (nsec): min=1921, max=245753, avg=2559.02, stdev=2474.18 00:17:57.265 clat (usec): min=2477, max=13102, avg=7077.68, stdev=630.62 00:17:57.265 lat (usec): min=2492, max=13104, avg=7080.24, stdev=630.51 00:17:57.265 clat 
percentiles (usec): 00:17:57.265 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6587], 00:17:57.265 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 6980], 60.00th=[ 7111], 00:17:57.265 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7767], 95.00th=[ 8160], 00:17:57.265 | 99.00th=[ 9110], 99.50th=[ 9503], 99.90th=[11076], 99.95th=[12125], 00:17:57.265 | 99.99th=[13042] 00:17:57.265 bw ( KiB/s): min=33496, max=34944, per=100.00%, avg=34386.00, stdev=624.87, samples=4 00:17:57.265 iops : min= 8374, max= 8736, avg=8596.50, stdev=156.22, samples=4 00:17:57.265 lat (msec) : 4=0.08%, 10=99.41%, 20=0.50% 00:17:57.265 cpu : usr=71.83%, sys=21.49%, ctx=27, majf=0, minf=7 00:17:57.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:57.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:57.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:57.265 issued rwts: total=17260,17254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:57.265 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:57.265 00:17:57.265 Run status group 0 (all jobs): 00:17:57.266 READ: bw=33.6MiB/s (35.2MB/s), 33.6MiB/s-33.6MiB/s (35.2MB/s-35.2MB/s), io=67.4MiB (70.7MB), run=2007-2007msec 00:17:57.266 WRITE: bw=33.6MiB/s (35.2MB/s), 33.6MiB/s-33.6MiB/s (35.2MB/s-35.2MB/s), io=67.4MiB (70.7MB), run=2007-2007msec 00:17:57.266 04:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:57.266 04:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:57.266 04:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:57.266 04:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:57.266 04:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:57.266 04:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:57.266 04:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:57.266 04:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:57.266 04:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:57.266 04:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:57.266 04:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:57.266 04:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:57.266 04:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:57.266 04:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:57.266 04:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:57.266 04:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:57.266 04:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:57.266 04:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:57.266 04:07:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:57.266 04:07:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:57.266 04:07:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:57.266 04:07:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:57.266 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:57.266 fio-3.35 00:17:57.266 Starting 1 thread 00:17:59.789 00:17:59.789 test: (groupid=0, jobs=1): err= 0: pid=75489: Mon Dec 9 04:07:41 2024 00:17:59.789 read: IOPS=7861, BW=123MiB/s (129MB/s)(247MiB/2010msec) 00:17:59.789 slat (usec): min=2, max=118, avg= 4.03, stdev= 2.21 00:17:59.789 clat (usec): min=2762, max=18692, avg=9045.30, stdev=2669.78 00:17:59.789 lat (usec): min=2766, max=18695, avg=9049.33, stdev=2669.81 00:17:59.789 clat percentiles (usec): 00:17:59.789 | 1.00th=[ 4228], 5.00th=[ 5080], 10.00th=[ 5669], 20.00th=[ 6587], 00:17:59.789 | 30.00th=[ 7373], 40.00th=[ 8094], 50.00th=[ 8848], 60.00th=[ 9634], 00:17:59.789 | 70.00th=[10552], 80.00th=[11338], 90.00th=[12649], 95.00th=[13566], 00:17:59.789 | 99.00th=[16057], 99.50th=[16909], 99.90th=[17957], 99.95th=[18220], 00:17:59.789 | 99.99th=[18744] 00:17:59.789 bw ( KiB/s): min=53696, max=74144, per=51.97%, avg=65368.00, stdev=9908.34, samples=4 00:17:59.789 iops : min= 3356, max= 4634, avg=4085.50, stdev=619.27, samples=4 00:17:59.789 write: IOPS=4612, BW=72.1MiB/s (75.6MB/s)(133MiB/1848msec); 0 zone resets 00:17:59.789 slat (usec): min=32, max=352, avg=39.18, stdev= 8.72 00:17:59.789 clat (usec): min=4890, max=22457, avg=12667.61, stdev=2463.19 00:17:59.789 lat (usec): min=4960, max=22492, avg=12706.79, stdev=2463.52 00:17:59.789 clat percentiles (usec): 00:17:59.789 | 1.00th=[ 8455], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10683], 00:17:59.789 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12256], 60.00th=[12911], 00:17:59.789 | 70.00th=[13566], 80.00th=[14615], 90.00th=[16057], 95.00th=[17171], 00:17:59.789 | 99.00th=[19792], 99.50th=[20317], 99.90th=[21103], 99.95th=[21890], 00:17:59.789 | 99.99th=[22414] 00:17:59.789 bw ( KiB/s): min=55040, max=77408, per=91.89%, avg=67808.00, stdev=10634.50, samples=4 00:17:59.789 iops : min= 3440, max= 4838, avg=4238.00, stdev=664.66, samples=4 00:17:59.789 lat (msec) : 4=0.28%, 10=45.69%, 20=53.81%, 50=0.22% 00:17:59.789 cpu : usr=83.13%, sys=12.34%, ctx=16, majf=0, minf=14 00:17:59.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:17:59.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:59.789 issued rwts: total=15802,8523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:59.790 00:17:59.790 Run status group 0 (all jobs): 00:17:59.790 
READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=247MiB (259MB), run=2010-2010msec 00:17:59.790 WRITE: bw=72.1MiB/s (75.6MB/s), 72.1MiB/s-72.1MiB/s (75.6MB/s-75.6MB/s), io=133MiB (140MB), run=1848-1848msec 00:17:59.790 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:00.047 rmmod nvme_tcp 00:18:00.047 rmmod nvme_fabrics 00:18:00.047 rmmod nvme_keyring 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 75363 ']' 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 75363 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 75363 ']' 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 75363 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75363 00:18:00.047 killing process with pid 75363 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75363' 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 75363 00:18:00.047 04:07:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 75363 00:18:00.305 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:00.305 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:00.305 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:00.305 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:18:00.305 04:07:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:18:00.305 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:00.305 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:18:00.305 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.305 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:00.305 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:00.305 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:18:00.563 ************************************ 00:18:00.563 00:18:00.563 real 0m9.698s 00:18:00.563 user 0m38.636s 00:18:00.563 sys 0m2.533s 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.563 END TEST nvmf_fio_host 00:18:00.563 ************************************ 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.563 04:07:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.823 ************************************ 00:18:00.823 START TEST nvmf_failover 
00:18:00.823 ************************************ 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:00.823 * Looking for test storage... 00:18:00.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:00.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.823 --rc genhtml_branch_coverage=1 00:18:00.823 --rc genhtml_function_coverage=1 00:18:00.823 --rc genhtml_legend=1 00:18:00.823 --rc geninfo_all_blocks=1 00:18:00.823 --rc geninfo_unexecuted_blocks=1 00:18:00.823 00:18:00.823 ' 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:00.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.823 --rc genhtml_branch_coverage=1 00:18:00.823 --rc genhtml_function_coverage=1 00:18:00.823 --rc genhtml_legend=1 00:18:00.823 --rc geninfo_all_blocks=1 00:18:00.823 --rc geninfo_unexecuted_blocks=1 00:18:00.823 00:18:00.823 ' 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:00.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.823 --rc genhtml_branch_coverage=1 00:18:00.823 --rc genhtml_function_coverage=1 00:18:00.823 --rc genhtml_legend=1 00:18:00.823 --rc geninfo_all_blocks=1 00:18:00.823 --rc geninfo_unexecuted_blocks=1 00:18:00.823 00:18:00.823 ' 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:00.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.823 --rc genhtml_branch_coverage=1 00:18:00.823 --rc genhtml_function_coverage=1 00:18:00.823 --rc genhtml_legend=1 00:18:00.823 --rc geninfo_all_blocks=1 00:18:00.823 --rc geninfo_unexecuted_blocks=1 00:18:00.823 00:18:00.823 ' 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.823 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.824 
04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.824 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:00.824 Cannot find device "nvmf_init_br" 00:18:00.824 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:18:00.825 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:00.825 Cannot find device "nvmf_init_br2" 00:18:01.083 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:18:01.083 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:18:01.083 Cannot find device "nvmf_tgt_br" 00:18:01.083 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:18:01.083 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:01.083 Cannot find device "nvmf_tgt_br2" 00:18:01.083 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:18:01.083 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:01.083 Cannot find device "nvmf_init_br" 00:18:01.083 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:18:01.083 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:01.083 Cannot find device "nvmf_init_br2" 00:18:01.083 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:18:01.083 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:01.083 Cannot find device "nvmf_tgt_br" 00:18:01.083 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:18:01.083 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:01.083 Cannot find device "nvmf_tgt_br2" 00:18:01.083 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:18:01.083 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:01.083 Cannot find device "nvmf_br" 00:18:01.083 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:18:01.083 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:01.083 Cannot find device "nvmf_init_if" 00:18:01.083 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:18:01.083 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:01.083 Cannot find device "nvmf_init_if2" 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:01.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:01.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:01.084 
04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:01.084 04:07:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:01.084 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:01.084 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:01.343 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:01.343 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:18:01.343 00:18:01.343 --- 10.0.0.3 ping statistics --- 00:18:01.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.343 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:01.343 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:01.343 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:18:01.343 00:18:01.343 --- 10.0.0.4 ping statistics --- 00:18:01.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.343 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:01.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:01.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:18:01.343 00:18:01.343 --- 10.0.0.1 ping statistics --- 00:18:01.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.343 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:01.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:01.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:18:01.343 00:18:01.343 --- 10.0.0.2 ping statistics --- 00:18:01.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.343 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75771 00:18:01.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
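For readers following the trace, the nvmf_veth_init sequence above builds a small two-namespace test topology before the target is started. Below is a condensed sketch assembled from the ip/iptables commands visible in the log; device names, addresses and the 4420 port are taken verbatim from the trace, while the loop and the omission of the individual "ip link set ... up" steps are editorial shorthand. The authoritative version is nvmf_veth_init in /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh.

# Target side lives in its own network namespace; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk

# Two initiator-facing and two target-facing veth pairs; the *_br peers get bridged below.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiators 10.0.0.1/.2, targets 10.0.0.3/.4, all /24.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# (each interface, the bridge and lo inside the namespace are also brought up at this point)

# Bridge the four *_br peers together and open TCP/4420 towards the initiator interfaces.
ip link add nvmf_br type bridge
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity pings in both directions, matching the four ping blocks in the trace.
ping -c 1 10.0.0.3; ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2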
00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75771 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75771 ']' 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.343 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:01.343 [2024-12-09 04:07:43.166618] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:18:01.343 [2024-12-09 04:07:43.166707] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.613 [2024-12-09 04:07:43.311007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:01.614 [2024-12-09 04:07:43.376595] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.614 [2024-12-09 04:07:43.376668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.614 [2024-12-09 04:07:43.376679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.614 [2024-12-09 04:07:43.376688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.614 [2024-12-09 04:07:43.376695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:01.614 [2024-12-09 04:07:43.378157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.614 [2024-12-09 04:07:43.378292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:01.614 [2024-12-09 04:07:43.378300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.614 [2024-12-09 04:07:43.456013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:01.614 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.614 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:01.614 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:01.614 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:01.614 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:01.883 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.883 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:01.883 [2024-12-09 04:07:43.827515] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.143 04:07:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:02.401 Malloc0 00:18:02.402 04:07:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:02.660 04:07:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:02.919 04:07:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:03.177 [2024-12-09 04:07:44.972291] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:03.177 04:07:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:03.435 [2024-12-09 04:07:45.264513] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:03.435 04:07:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:03.692 [2024-12-09 04:07:45.520691] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:18:03.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
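Stripped of the xtrace prefixes, the target-side bring-up performed above reduces to the RPC sequence below. Every command is copied from the trace; only the rpc shorthand variable and the port loop are editorial, and the harness additionally waits for /var/tmp/spdk.sock before issuing the first RPC.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# nvmf_tgt runs inside the test namespace: shm id 0, tracepoint mask 0xFFFF, core mask 0xE.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

# TCP transport with the options recorded in the trace.
$rpc nvmf_create_transport -t tcp -o -u 8192

# 64 MiB malloc bdev with 512-byte blocks, exported as a namespace of cnode1.
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# Three listeners on the first target address; the failover test below toggles these.
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
done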
00:18:03.692 04:07:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75823 00:18:03.692 04:07:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:03.692 04:07:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:03.692 04:07:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75823 /var/tmp/bdevperf.sock 00:18:03.692 04:07:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75823 ']' 00:18:03.692 04:07:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.692 04:07:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:03.692 04:07:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:03.692 04:07:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.693 04:07:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:05.066 04:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.066 04:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:05.066 04:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:05.066 NVMe0n1 00:18:05.066 04:07:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:05.324 00:18:05.582 04:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75847 00:18:05.582 04:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:05.582 04:07:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:06.514 04:07:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:06.772 04:07:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:10.053 04:07:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:10.054 00:18:10.054 04:07:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:10.312 04:07:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:13.636 04:07:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:13.636 [2024-12-09 04:07:55.501699] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:13.636 04:07:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:15.009 04:07:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:15.009 04:07:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75847 00:18:21.575 { 00:18:21.575 "results": [ 00:18:21.575 { 00:18:21.575 "job": "NVMe0n1", 00:18:21.575 "core_mask": "0x1", 00:18:21.575 "workload": "verify", 00:18:21.575 "status": "finished", 00:18:21.575 "verify_range": { 00:18:21.575 "start": 0, 00:18:21.575 "length": 16384 00:18:21.575 }, 00:18:21.575 "queue_depth": 128, 00:18:21.575 "io_size": 4096, 00:18:21.575 "runtime": 15.009687, 00:18:21.575 "iops": 9419.583499642598, 00:18:21.575 "mibps": 36.7952480454789, 00:18:21.575 "io_failed": 3397, 00:18:21.575 "io_timeout": 0, 00:18:21.575 "avg_latency_us": 13238.705469213275, 00:18:21.575 "min_latency_us": 629.2945454545454, 00:18:21.575 "max_latency_us": 15013.701818181818 00:18:21.575 } 00:18:21.575 ], 00:18:21.575 "core_count": 1 00:18:21.575 } 00:18:21.575 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75823 00:18:21.575 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75823 ']' 00:18:21.575 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75823 00:18:21.575 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:21.575 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.575 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75823 00:18:21.575 killing process with pid 75823 00:18:21.575 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:21.575 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:21.575 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75823' 00:18:21.575 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75823 00:18:21.575 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75823 00:18:21.575 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:21.576 [2024-12-09 04:07:45.603059] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
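On the host side, the failover exercise itself is short: attach one NVMe bdev over two of the three listeners with failover enabled, start a 15-second verify workload, and flip listeners underneath it. The sketch below is condensed from the host/failover.sh commands traced above; the brpc shorthand is editorial and the sleeps between steps are omitted. At the recorded ~9419.6 IOPS over the 15.01 s runtime that is roughly 141 k completed I/Os, with the 3397 failed I/Os absorbed across the listener flips.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
brpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }

# bdevperf in RPC-driven mode: queue depth 128, 4 KiB I/O, verify workload, 15 s.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

# NVMe0 gets two paths (ports 4420 and 4421) with failover between them.
brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

# Start I/O, then pull the active listener out from under it.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Add a third path on 4422, retire 4421, bring 4420 back, retire 4422, then wait for the run to finish;
# the JSON summary above is the result of that run.
brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
$rpc nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422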
00:18:21.576 [2024-12-09 04:07:45.603224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75823 ] 00:18:21.576 [2024-12-09 04:07:45.754823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.576 [2024-12-09 04:07:45.842347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.576 [2024-12-09 04:07:45.917612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:21.576 Running I/O for 15 seconds... 00:18:21.576 7189.00 IOPS, 28.08 MiB/s [2024-12-09T04:08:03.526Z] [2024-12-09 04:07:48.553246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.553324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.553370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.553387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.553420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.553435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.553450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.553464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.553480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.553494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.553511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.553525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.553540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.553554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.553570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.553584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:21.576 [2024-12-09 04:07:48.553600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.553614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.553629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.553644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.553659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.553733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.553752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.553766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.553782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.553796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.553812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.553826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.553842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.553856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.553872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.553886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.553902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.553916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.553943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.553959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.553990] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.554003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.554030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.554044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.554059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.554072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.554087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.554100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.554115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.554129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.554152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.554167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.576 [2024-12-09 04:07:48.554193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.576 [2024-12-09 04:07:48.554207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554326] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66376 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.554929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 
[2024-12-09 04:07:48.554966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.554988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.555002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.555017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.555031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.555046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.555060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.555075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.577 [2024-12-09 04:07:48.555088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.577 [2024-12-09 04:07:48.555103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.578 [2024-12-09 04:07:48.555882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-12-09 04:07:48.555911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-12-09 04:07:48.555942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.578 [2024-12-09 04:07:48.555963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.578 [2024-12-09 04:07:48.555994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 
04:07:48.556273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-12-09 04:07:48.556459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-12-09 04:07:48.556490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.579 [2024-12-09 04:07:48.556761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:108 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.579 [2024-12-09 04:07:48.556956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.579 [2024-12-09 04:07:48.556971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-12-09 04:07:48.556986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:48.557012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-12-09 04:07:48.557033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:48.557051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-12-09 04:07:48.557067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:48.557089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-12-09 04:07:48.557104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:48.557120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-12-09 04:07:48.557135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:48.557151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-12-09 04:07:48.557166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:48.557182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-12-09 04:07:48.557196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:48.557213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-12-09 04:07:48.557238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:48.557258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-12-09 04:07:48.557273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:48.557289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66000 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-12-09 04:07:48.557304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:48.557320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-12-09 04:07:48.557334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:48.557350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-12-09 04:07:48.557364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:48.557380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-12-09 04:07:48.557395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:48.557411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-12-09 04:07:48.557425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:48.557449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-12-09 04:07:48.557464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:48.557481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-12-09 04:07:48.557495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:48.557512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.580 [2024-12-09 04:07:48.557526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:48.557542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc97b00 is same with the state(6) to be set 00:18:21.580 [2024-12-09 04:07:48.557559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.580 [2024-12-09 04:07:48.557587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.580 [2024-12-09 04:07:48.557599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66064 len:8 PRP1 0x0 PRP2 0x0 00:18:21.580 [2024-12-09 04:07:48.557621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:48.557710] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start 
failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:18:21.580 [2024-12-09 04:07:48.557776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:21.580 [2024-12-09 04:07:48.557800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:21.580 [2024-12-09 04:07:48.557817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:21.580 [2024-12-09 04:07:48.557832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:21.580 [2024-12-09 04:07:48.557848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:21.580 [2024-12-09 04:07:48.557862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:21.580 [2024-12-09 04:07:48.557877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:21.580 [2024-12-09 04:07:48.557892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:21.580 [2024-12-09 04:07:48.557907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:18:21.580 [2024-12-09 04:07:48.557970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc26c60 (9): Bad file descriptor
00:18:21.580 [2024-12-09 04:07:48.561911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:18:21.580 [2024-12-09 04:07:48.592078] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
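The burst of *NOTICE* lines above is SPDK printing each queued command it completed as ABORTED - SQ DELETION while the TCP qpair to 10.0.0.3:4420 was torn down; after the failover target 10.0.0.3:4421 is connected and the controller reset completes, I/O resumes (the IOPS lines that follow). A minimal way to pull that story out of a saved copy of this console output, assuming it was captured to a file named console.log (the file name and these post-processing commands are illustrative sketches, not part of the test run):
  # count how many queued I/Os were completed as ABORTED - SQ DELETION
  grep -c 'ABORTED - SQ DELETION' console.log
  # show only the failover / disconnect / reset milestones
  grep -E 'bdev_nvme_failover_trid|nvme_ctrlr_disconnect|bdev_nvme_reset_ctrlr_complete' console.log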
00:18:21.580 7998.00 IOPS, 31.24 MiB/s [2024-12-09T04:08:03.530Z] 8529.33 IOPS, 33.32 MiB/s [2024-12-09T04:08:03.530Z] 8827.00 IOPS, 34.48 MiB/s [2024-12-09T04:08:03.530Z] [2024-12-09 04:07:52.213809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-12-09 04:07:52.213886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:52.213953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-12-09 04:07:52.213972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:52.213988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.580 [2024-12-09 04:07:52.214003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.580 [2024-12-09 04:07:52.214019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-12-09 04:07:52.214049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-12-09 04:07:52.214078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-12-09 04:07:52.214107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-12-09 04:07:52.214136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-12-09 04:07:52.214183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-12-09 04:07:52.214228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-12-09 04:07:52.214262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-12-09 04:07:52.214293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-12-09 04:07:52.214325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-12-09 04:07:52.214355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-12-09 04:07:52.214396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-12-09 04:07:52.214429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-12-09 04:07:52.214459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-12-09 04:07:52.214490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-12-09 04:07:52.214523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-12-09 04:07:52.214553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-12-09 04:07:52.214584] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-12-09 04:07:52.214615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-12-09 04:07:52.214645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-12-09 04:07:52.214676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.581 [2024-12-09 04:07:52.214725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-12-09 04:07:52.214757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-12-09 04:07:52.214789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:91296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.581 [2024-12-09 04:07:52.214828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.581 [2024-12-09 04:07:52.214845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:91304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.582 [2024-12-09 04:07:52.214861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.214878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.582 [2024-12-09 04:07:52.214893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.214910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.582 [2024-12-09 04:07:52.214925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.214942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.582 [2024-12-09 04:07:52.214957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.214973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.582 [2024-12-09 04:07:52.214988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.582 [2024-12-09 04:07:52.215020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.582 [2024-12-09 04:07:52.215053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.582 [2024-12-09 04:07:52.215084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.582 [2024-12-09 04:07:52.215115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.582 [2024-12-09 04:07:52.215147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.582 [2024-12-09 04:07:52.215178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.582 [2024-12-09 04:07:52.215223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.582 [2024-12-09 04:07:52.215264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 
[2024-12-09 04:07:52.215281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-12-09 04:07:52.215297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-12-09 04:07:52.215329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-12-09 04:07:52.215361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-12-09 04:07:52.215394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:90864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-12-09 04:07:52.215436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-12-09 04:07:52.215468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-12-09 04:07:52.215500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-12-09 04:07:52.215536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-12-09 04:07:52.215568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-12-09 04:07:52.215600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215617] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-12-09 04:07:52.215633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-12-09 04:07:52.215686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-12-09 04:07:52.215734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.582 [2024-12-09 04:07:52.215766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.582 [2024-12-09 04:07:52.215787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-12-09 04:07:52.215802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.215819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-12-09 04:07:52.215835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.215852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-12-09 04:07:52.215867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.215884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-12-09 04:07:52.215898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.215915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-12-09 04:07:52.215930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.215950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-12-09 04:07:52.215965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.215981] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-12-09 04:07:52.215996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-12-09 04:07:52.216041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-12-09 04:07:52.216071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-12-09 04:07:52.216102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-12-09 04:07:52.216140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-12-09 04:07:52.216171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-12-09 04:07:52.216219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-12-09 04:07:52.216262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-12-09 04:07:52.216298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-12-09 04:07:52.216329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91520 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-12-09 04:07:52.216368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-12-09 04:07:52.216412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-12-09 04:07:52.216453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-12-09 04:07:52.216484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-12-09 04:07:52.216516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-12-09 04:07:52.216548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-12-09 04:07:52.216580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-12-09 04:07:52.216620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-12-09 04:07:52.216656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.583 [2024-12-09 04:07:52.216688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:21.583 [2024-12-09 04:07:52.216720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.583 [2024-12-09 04:07:52.216782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.583 [2024-12-09 04:07:52.216801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-12-09 04:07:52.216815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.216830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-12-09 04:07:52.216845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.216860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-12-09 04:07:52.216874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.216890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-12-09 04:07:52.216905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.216920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-12-09 04:07:52.216935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.216951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-12-09 04:07:52.216965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.216980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-12-09 04:07:52.216994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-12-09 04:07:52.217031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-12-09 04:07:52.217062] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-12-09 04:07:52.217092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-12-09 04:07:52.217122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-12-09 04:07:52.217152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:91072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-12-09 04:07:52.217198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:91080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-12-09 04:07:52.217243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-12-09 04:07:52.217275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-12-09 04:07:52.217306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-12-09 04:07:52.217337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-12-09 04:07:52.217368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-12-09 04:07:52.217414] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-12-09 04:07:52.217446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-12-09 04:07:52.217497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.584 [2024-12-09 04:07:52.217529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:91088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-12-09 04:07:52.217561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:91096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-12-09 04:07:52.217592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:91104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-12-09 04:07:52.217624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:91112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-12-09 04:07:52.217656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-12-09 04:07:52.217687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:91128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-12-09 04:07:52.217733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:91136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.584 [2024-12-09 04:07:52.217766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.584 [2024-12-09 04:07:52.217783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:91144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-12-09 04:07:52.217798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.217814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-12-09 04:07:52.217829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.217846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-12-09 04:07:52.217861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.217878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-12-09 04:07:52.217901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.217918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-12-09 04:07:52.217933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.217949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-12-09 04:07:52.217964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.217980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-12-09 04:07:52.217995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.218023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.585 [2024-12-09 04:07:52.218059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.218074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd5790 is same with the state(6) to be set 00:18:21.585 [2024-12-09 04:07:52.218091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.585 [2024-12-09 04:07:52.218103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.585 [2024-12-09 04:07:52.218113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91208 len:8 PRP1 0x0 PRP2 0x0 00:18:21.585 [2024-12-09 04:07:52.218127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.218142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.585 [2024-12-09 04:07:52.218152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.585 [2024-12-09 04:07:52.218179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91664 len:8 PRP1 0x0 PRP2 0x0 00:18:21.585 [2024-12-09 04:07:52.218209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.218234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.585 [2024-12-09 04:07:52.218248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.585 [2024-12-09 04:07:52.218260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91672 len:8 PRP1 0x0 PRP2 0x0 00:18:21.585 [2024-12-09 04:07:52.218274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.218288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.585 [2024-12-09 04:07:52.218300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.585 [2024-12-09 04:07:52.218313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91680 len:8 PRP1 0x0 PRP2 0x0 00:18:21.585 [2024-12-09 04:07:52.218327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.218342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.585 [2024-12-09 04:07:52.218352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.585 [2024-12-09 04:07:52.218363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91688 len:8 PRP1 0x0 PRP2 0x0 00:18:21.585 [2024-12-09 04:07:52.218386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.218403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.585 [2024-12-09 04:07:52.218414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.585 [2024-12-09 04:07:52.218425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91696 len:8 PRP1 0x0 PRP2 0x0 00:18:21.585 [2024-12-09 04:07:52.218439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.218453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.585 [2024-12-09 04:07:52.218464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.585 [2024-12-09 04:07:52.218475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91704 len:8 PRP1 0x0 PRP2 0x0 00:18:21.585 [2024-12-09 04:07:52.218488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 
[2024-12-09 04:07:52.218503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.585 [2024-12-09 04:07:52.218514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.585 [2024-12-09 04:07:52.218531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91712 len:8 PRP1 0x0 PRP2 0x0 00:18:21.585 [2024-12-09 04:07:52.218546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.218560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.585 [2024-12-09 04:07:52.218571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.585 [2024-12-09 04:07:52.218582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91720 len:8 PRP1 0x0 PRP2 0x0 00:18:21.585 [2024-12-09 04:07:52.218596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.218671] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:18:21.585 [2024-12-09 04:07:52.218734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.585 [2024-12-09 04:07:52.218757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.218789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.585 [2024-12-09 04:07:52.218803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.218818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.585 [2024-12-09 04:07:52.218831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.218847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.585 [2024-12-09 04:07:52.218861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.585 [2024-12-09 04:07:52.218875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:21.585 [2024-12-09 04:07:52.218927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc26c60 (9): Bad file descriptor 00:18:21.586 [2024-12-09 04:07:52.222866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:21.586 [2024-12-09 04:07:52.245202] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:18:21.586 8886.80 IOPS, 34.71 MiB/s [2024-12-09T04:08:03.536Z] 9041.50 IOPS, 35.32 MiB/s [2024-12-09T04:08:03.536Z] 9148.71 IOPS, 35.74 MiB/s [2024-12-09T04:08:03.536Z] 9233.12 IOPS, 36.07 MiB/s [2024-12-09T04:08:03.536Z] 9284.56 IOPS, 36.27 MiB/s [2024-12-09T04:08:03.536Z] [2024-12-09 04:07:56.801641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.801751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.801786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.801805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.801822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.801838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.801855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.801870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.801887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.801903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.801919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.801935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.801951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.801966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.801983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.801999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.802046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61184 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.802091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.802121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.802183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.802248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.802278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.802307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.802337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.586 [2024-12-09 04:07:56.802366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.586 [2024-12-09 04:07:56.802404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.586 [2024-12-09 04:07:56.802434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:21.586 [2024-12-09 04:07:56.802464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.586 [2024-12-09 04:07:56.802494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.586 [2024-12-09 04:07:56.802523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.586 [2024-12-09 04:07:56.802553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.586 [2024-12-09 04:07:56.802583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.586 [2024-12-09 04:07:56.802623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.586 [2024-12-09 04:07:56.802653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.586 [2024-12-09 04:07:56.802683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.586 [2024-12-09 04:07:56.802713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.586 [2024-12-09 04:07:56.802743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.586 [2024-12-09 04:07:56.802772] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.586 [2024-12-09 04:07:56.802801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.586 [2024-12-09 04:07:56.802831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.802861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.802892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.802921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.802952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.802967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.802981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.586 [2024-12-09 04:07:56.803025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.586 [2024-12-09 04:07:56.803041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.587 [2024-12-09 04:07:56.803071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.587 [2024-12-09 04:07:56.803101] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.587 [2024-12-09 04:07:56.803130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.587 [2024-12-09 04:07:56.803159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.587 [2024-12-09 04:07:56.803200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.587 [2024-12-09 04:07:56.803231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.587 [2024-12-09 04:07:56.803261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.587 [2024-12-09 04:07:56.803291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.587 [2024-12-09 04:07:56.803320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.587 [2024-12-09 04:07:56.803350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.587 [2024-12-09 04:07:56.803379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.587 [2024-12-09 04:07:56.803418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.587 [2024-12-09 04:07:56.803450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.587 [2024-12-09 04:07:56.803480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.587 [2024-12-09 04:07:56.803510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.587 [2024-12-09 04:07:56.803540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.587 [2024-12-09 04:07:56.803570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.587 [2024-12-09 04:07:56.803599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.587 [2024-12-09 04:07:56.803628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.587 [2024-12-09 04:07:56.803660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.587 [2024-12-09 04:07:56.803690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.587 [2024-12-09 04:07:56.803720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 
[2024-12-09 04:07:56.803735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.587 [2024-12-09 04:07:56.803750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.587 [2024-12-09 04:07:56.803779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.587 [2024-12-09 04:07:56.803816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.587 [2024-12-09 04:07:56.803846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.587 [2024-12-09 04:07:56.803876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.587 [2024-12-09 04:07:56.803907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.587 [2024-12-09 04:07:56.803936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.587 [2024-12-09 04:07:56.803966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.803981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.587 [2024-12-09 04:07:56.803996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.804011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.587 [2024-12-09 04:07:56.804025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.804040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.587 [2024-12-09 04:07:56.804054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.804069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.587 [2024-12-09 04:07:56.804083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.587 [2024-12-09 04:07:56.804099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.588 [2024-12-09 04:07:56.804113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.588 [2024-12-09 04:07:56.804143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.588 [2024-12-09 04:07:56.804208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.588 [2024-12-09 04:07:56.804241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.588 [2024-12-09 04:07:56.804272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.588 [2024-12-09 04:07:56.804302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.588 [2024-12-09 04:07:56.804332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.588 [2024-12-09 04:07:56.804363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.804393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.804426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.804456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.804487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.804518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.804548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.804594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.804630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.804674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.804705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60936 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.804735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.804764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.804794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.804824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.804854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.804884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.588 [2024-12-09 04:07:56.804913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.588 [2024-12-09 04:07:56.804944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.588 [2024-12-09 04:07:56.804973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.804989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.588 [2024-12-09 04:07:56.805003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.805026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.588 
[2024-12-09 04:07:56.805041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.805056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.588 [2024-12-09 04:07:56.805070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.805085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.588 [2024-12-09 04:07:56.805099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.805114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.588 [2024-12-09 04:07:56.805128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.805143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.805164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.805180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.805203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.805222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.805236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.805252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.805267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.805282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.805296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.805312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.805326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.805341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.805355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.805370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.588 [2024-12-09 04:07:56.805385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.588 [2024-12-09 04:07:56.805401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.589 [2024-12-09 04:07:56.805423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.805440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.589 [2024-12-09 04:07:56.805455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.805471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.589 [2024-12-09 04:07:56.805485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.805501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.589 [2024-12-09 04:07:56.805515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.805530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.589 [2024-12-09 04:07:56.805544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.805560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.589 [2024-12-09 04:07:56.805575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.805590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.589 [2024-12-09 04:07:56.805604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.805618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd5450 is same with the state(6) to be set 00:18:21.589 [2024-12-09 04:07:56.805635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.589 [2024-12-09 04:07:56.805646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.589 [2024-12-09 04:07:56.805663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61104 len:8 PRP1 0x0 PRP2 0x0 00:18:21.589 [2024-12-09 04:07:56.805678] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.805692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.589 [2024-12-09 04:07:56.805703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.589 [2024-12-09 04:07:56.805723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61560 len:8 PRP1 0x0 PRP2 0x0 00:18:21.589 [2024-12-09 04:07:56.805755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.805775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.589 [2024-12-09 04:07:56.805786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.589 [2024-12-09 04:07:56.805797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61568 len:8 PRP1 0x0 PRP2 0x0 00:18:21.589 [2024-12-09 04:07:56.805811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.805826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.589 [2024-12-09 04:07:56.805837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.589 [2024-12-09 04:07:56.805856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61576 len:8 PRP1 0x0 PRP2 0x0 00:18:21.589 [2024-12-09 04:07:56.805871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.805886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.589 [2024-12-09 04:07:56.805897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.589 [2024-12-09 04:07:56.805908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61584 len:8 PRP1 0x0 PRP2 0x0 00:18:21.589 [2024-12-09 04:07:56.805923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.805937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.589 [2024-12-09 04:07:56.805948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.589 [2024-12-09 04:07:56.805959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61592 len:8 PRP1 0x0 PRP2 0x0 00:18:21.589 [2024-12-09 04:07:56.805973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.805987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.589 [2024-12-09 04:07:56.805998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.589 [2024-12-09 04:07:56.806009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61600 len:8 PRP1 0x0 PRP2 0x0 00:18:21.589 [2024-12-09 04:07:56.806023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.806037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.589 [2024-12-09 04:07:56.806062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.589 [2024-12-09 04:07:56.806087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61608 len:8 PRP1 0x0 PRP2 0x0 00:18:21.589 [2024-12-09 04:07:56.806101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.806114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.589 [2024-12-09 04:07:56.806124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.589 [2024-12-09 04:07:56.806141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61616 len:8 PRP1 0x0 PRP2 0x0 00:18:21.589 [2024-12-09 04:07:56.806154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.806239] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:18:21.589 [2024-12-09 04:07:56.806302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.589 [2024-12-09 04:07:56.806324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.806340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.589 [2024-12-09 04:07:56.806353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.806368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.589 [2024-12-09 04:07:56.806381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.806407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.589 [2024-12-09 04:07:56.806421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.589 [2024-12-09 04:07:56.806435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:21.589 [2024-12-09 04:07:56.810201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:21.589 [2024-12-09 04:07:56.810254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc26c60 (9): Bad file descriptor 00:18:21.589 [2024-12-09 04:07:56.839530] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:18:21.589 9274.50 IOPS, 36.23 MiB/s [2024-12-09T04:08:03.539Z] 9327.36 IOPS, 36.44 MiB/s [2024-12-09T04:08:03.539Z] 9374.75 IOPS, 36.62 MiB/s [2024-12-09T04:08:03.539Z] 9396.23 IOPS, 36.70 MiB/s [2024-12-09T04:08:03.539Z] 9409.79 IOPS, 36.76 MiB/s [2024-12-09T04:08:03.539Z] 9419.00 IOPS, 36.79 MiB/s 00:18:21.589 Latency(us) 00:18:21.589 [2024-12-09T04:08:03.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.589 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:21.589 Verification LBA range: start 0x0 length 0x4000 00:18:21.589 NVMe0n1 : 15.01 9419.58 36.80 226.32 0.00 13238.71 629.29 15013.70 00:18:21.589 [2024-12-09T04:08:03.539Z] =================================================================================================================== 00:18:21.589 [2024-12-09T04:08:03.539Z] Total : 9419.58 36.80 226.32 0.00 13238.71 629.29 15013.70 00:18:21.589 Received shutdown signal, test time was about 15.000000 seconds 00:18:21.589 00:18:21.589 Latency(us) 00:18:21.589 [2024-12-09T04:08:03.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.589 [2024-12-09T04:08:03.539Z] =================================================================================================================== 00:18:21.589 [2024-12-09T04:08:03.539Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:21.589 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:18:21.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:21.589 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:18:21.589 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:18:21.589 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76023 00:18:21.589 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:18:21.589 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76023 /var/tmp/bdevperf.sock 00:18:21.589 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 76023 ']' 00:18:21.589 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:21.589 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.589 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
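For reference, the pass criterion traced just above reduces to a few lines of shell: count how many controller resets succeeded in the captured bdevperf log (the try.txt file that is dumped and removed further down) and require exactly three. A rough restatement, not the script verbatim:

    count=$(grep -c 'Resetting controller successful' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi

The expected value of three corresponds to the "Resetting controller successful" notices scattered through this log and through the try.txt dump below.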
00:18:21.589 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.589 04:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:21.589 04:08:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.589 04:08:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:21.590 04:08:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:21.590 [2024-12-09 04:08:03.482718] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:21.590 04:08:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:21.872 [2024-12-09 04:08:03.783026] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:18:21.872 04:08:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:22.439 NVMe0n1 00:18:22.439 04:08:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:22.696 00:18:22.696 04:08:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:22.954 00:18:22.955 04:08:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:22.955 04:08:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:23.218 04:08:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:23.479 04:08:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:18:26.761 04:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:26.762 04:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:18:26.762 04:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76099 00:18:26.762 04:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:26.762 04:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 76099 00:18:28.138 { 00:18:28.138 "results": [ 00:18:28.138 { 00:18:28.138 "job": "NVMe0n1", 00:18:28.138 "core_mask": "0x1", 00:18:28.138 "workload": "verify", 00:18:28.138 "status": "finished", 00:18:28.138 "verify_range": { 00:18:28.138 "start": 0, 00:18:28.138 "length": 16384 00:18:28.138 }, 00:18:28.138 "queue_depth": 128, 
00:18:28.138 "io_size": 4096, 00:18:28.138 "runtime": 1.006047, 00:18:28.138 "iops": 7033.468615283381, 00:18:28.138 "mibps": 27.474486778450707, 00:18:28.138 "io_failed": 0, 00:18:28.138 "io_timeout": 0, 00:18:28.138 "avg_latency_us": 18105.149872038648, 00:18:28.138 "min_latency_us": 1064.96, 00:18:28.138 "max_latency_us": 15966.952727272726 00:18:28.138 } 00:18:28.138 ], 00:18:28.138 "core_count": 1 00:18:28.138 } 00:18:28.138 04:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:28.138 [2024-12-09 04:08:02.828453] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:18:28.138 [2024-12-09 04:08:02.828575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76023 ] 00:18:28.138 [2024-12-09 04:08:02.975487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.138 [2024-12-09 04:08:03.048940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.138 [2024-12-09 04:08:03.124764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:28.139 [2024-12-09 04:08:05.309053] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:18:28.139 [2024-12-09 04:08:05.309213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.139 [2024-12-09 04:08:05.309242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.139 [2024-12-09 04:08:05.309264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.139 [2024-12-09 04:08:05.309279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.139 [2024-12-09 04:08:05.309294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.139 [2024-12-09 04:08:05.309308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.139 [2024-12-09 04:08:05.309324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.139 [2024-12-09 04:08:05.309338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.139 [2024-12-09 04:08:05.309353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:18:28.139 [2024-12-09 04:08:05.309413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:18:28.139 [2024-12-09 04:08:05.309459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1934c60 (9): Bad file descriptor 00:18:28.139 [2024-12-09 04:08:05.313957] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:18:28.139 Running I/O for 1 seconds... 
00:18:28.139 6933.00 IOPS, 27.08 MiB/s 00:18:28.139 Latency(us) 00:18:28.139 [2024-12-09T04:08:10.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.139 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:28.139 Verification LBA range: start 0x0 length 0x4000 00:18:28.139 NVMe0n1 : 1.01 7033.47 27.47 0.00 0.00 18105.15 1064.96 15966.95 00:18:28.139 [2024-12-09T04:08:10.089Z] =================================================================================================================== 00:18:28.139 [2024-12-09T04:08:10.089Z] Total : 7033.47 27.47 0.00 0.00 18105.15 1064.96 15966.95 00:18:28.139 04:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:28.139 04:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:18:28.397 04:08:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:28.655 04:08:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:18:28.655 04:08:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:28.913 04:08:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:29.170 04:08:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:18:32.456 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:32.456 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:18:32.456 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 76023 00:18:32.456 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 76023 ']' 00:18:32.456 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 76023 00:18:32.456 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:32.456 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.456 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76023 00:18:32.456 killing process with pid 76023 00:18:32.456 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:32.456 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:32.456 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76023' 00:18:32.456 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 76023 00:18:32.456 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 76023 00:18:32.712 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:18:32.713 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:33.279 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:33.279 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:33.279 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:33.279 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:33.279 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:18:33.279 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:33.279 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:18:33.280 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:33.280 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:33.280 rmmod nvme_tcp 00:18:33.280 rmmod nvme_fabrics 00:18:33.280 rmmod nvme_keyring 00:18:33.280 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:33.280 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:18:33.280 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:18:33.280 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75771 ']' 00:18:33.280 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75771 00:18:33.280 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75771 ']' 00:18:33.280 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75771 00:18:33.280 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:33.280 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.280 04:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75771 00:18:33.280 killing process with pid 75771 00:18:33.280 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:33.280 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:33.280 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75771' 00:18:33.280 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75771 00:18:33.280 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75771 00:18:33.543 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:33.543 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:33.543 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:33.543 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:18:33.543 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:18:33.543 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:33.543 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:18:33.543 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:33.543 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:33.543 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:33.543 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:33.543 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:33.543 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:33.543 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:33.543 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:33.543 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:33.543 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:33.543 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:33.801 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:33.801 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:33.801 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:33.801 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:33.801 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:33.801 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.801 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:33.801 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.801 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:18:33.801 00:18:33.801 real 0m33.113s 00:18:33.801 user 2m7.771s 00:18:33.801 sys 0m5.844s 00:18:33.801 ************************************ 00:18:33.801 END TEST nvmf_failover 00:18:33.801 ************************************ 00:18:33.801 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.801 04:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:33.801 04:08:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:33.801 04:08:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:33.801 04:08:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.801 04:08:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.801 ************************************ 00:18:33.801 START TEST nvmf_host_discovery 00:18:33.801 ************************************ 00:18:33.801 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:34.060 * Looking for test storage... 
00:18:34.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:34.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.060 --rc genhtml_branch_coverage=1 00:18:34.060 --rc genhtml_function_coverage=1 00:18:34.060 --rc genhtml_legend=1 00:18:34.060 --rc geninfo_all_blocks=1 00:18:34.060 --rc geninfo_unexecuted_blocks=1 00:18:34.060 00:18:34.060 ' 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:34.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.060 --rc genhtml_branch_coverage=1 00:18:34.060 --rc genhtml_function_coverage=1 00:18:34.060 --rc genhtml_legend=1 00:18:34.060 --rc geninfo_all_blocks=1 00:18:34.060 --rc geninfo_unexecuted_blocks=1 00:18:34.060 00:18:34.060 ' 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:34.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.060 --rc genhtml_branch_coverage=1 00:18:34.060 --rc genhtml_function_coverage=1 00:18:34.060 --rc genhtml_legend=1 00:18:34.060 --rc geninfo_all_blocks=1 00:18:34.060 --rc geninfo_unexecuted_blocks=1 00:18:34.060 00:18:34.060 ' 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:34.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.060 --rc genhtml_branch_coverage=1 00:18:34.060 --rc genhtml_function_coverage=1 00:18:34.060 --rc genhtml_legend=1 00:18:34.060 --rc geninfo_all_blocks=1 00:18:34.060 --rc geninfo_unexecuted_blocks=1 00:18:34.060 00:18:34.060 ' 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.060 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:34.061 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
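The nvmf_veth_init trace that follows builds SPDK's standard veth test topology: the initiator-side interfaces stay in the default namespace with 10.0.0.1/10.0.0.2, the target-side interfaces move into the nvmf_tgt_ns_spdk namespace with 10.0.0.3/10.0.0.4, and the host-side peers are joined by the nvmf_br bridge. Stripped of the xtrace noise, the essential steps for the first initiator/target pair are roughly these (the *_if2 interfaces are set up the same way):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if end carries the address, the *_br end is enslaved to the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP connections to port 4420 through the firewall
    # (the trace adds one such rule per initiator interface)
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The ping checks at the end of the trace confirm that the two namespaces can reach each other in both directions before the discovery test proper starts.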
00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:34.061 Cannot find device "nvmf_init_br" 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:34.061 Cannot find device "nvmf_init_br2" 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:34.061 Cannot find device "nvmf_tgt_br" 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:34.061 Cannot find device "nvmf_tgt_br2" 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:34.061 Cannot find device "nvmf_init_br" 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:34.061 Cannot find device "nvmf_init_br2" 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:34.061 Cannot find device "nvmf_tgt_br" 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:34.061 Cannot find device "nvmf_tgt_br2" 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:34.061 Cannot find device "nvmf_br" 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:18:34.061 04:08:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:34.320 Cannot find device "nvmf_init_if" 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:34.320 Cannot find device "nvmf_init_if2" 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:34.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:34.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:34.320 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:34.321 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:34.321 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:18:34.321 00:18:34.321 --- 10.0.0.3 ping statistics --- 00:18:34.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.321 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:18:34.321 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:34.321 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:34.321 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:18:34.321 00:18:34.321 --- 10.0.0.4 ping statistics --- 00:18:34.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.321 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:18:34.321 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:34.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:34.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:18:34.321 00:18:34.321 --- 10.0.0.1 ping statistics --- 00:18:34.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.321 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:18:34.321 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:34.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:34.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:18:34.579 00:18:34.579 --- 10.0.0.2 ping statistics --- 00:18:34.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.579 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=76425 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 76425 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76425 ']' 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.579 04:08:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.579 [2024-12-09 04:08:16.365318] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:18:34.579 [2024-12-09 04:08:16.366485] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.579 [2024-12-09 04:08:16.508832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.838 [2024-12-09 04:08:16.568148] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.838 [2024-12-09 04:08:16.568223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.838 [2024-12-09 04:08:16.568251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.838 [2024-12-09 04:08:16.568260] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.838 [2024-12-09 04:08:16.568266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.838 [2024-12-09 04:08:16.568671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.838 [2024-12-09 04:08:16.645241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:35.775 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.775 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:18:35.775 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.776 [2024-12-09 04:08:17.410490] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.776 [2024-12-09 04:08:17.418674] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.776 04:08:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.776 null0 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.776 null1 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.776 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76463 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76463 /tmp/host.sock 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76463 ']' 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.776 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.776 [2024-12-09 04:08:17.516205] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:18:35.776 [2024-12-09 04:08:17.516572] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76463 ] 00:18:35.776 [2024-12-09 04:08:17.667798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.034 [2024-12-09 04:08:17.741995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.034 [2024-12-09 04:08:17.818675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:36.034 04:08:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:36.034 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:36.292 04:08:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.292 04:08:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:36.292 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.551 [2024-12-09 04:08:18.270833] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:36.551 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:36.552 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:36.552 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:36.552 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.552 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.552 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.552 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:36.552 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:36.552 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:36.552 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:36.552 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:36.552 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:36.552 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:36.552 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:36.552 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.552 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.552 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:36.552 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:36.552 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.810 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:18:36.810 04:08:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:18:37.069 [2024-12-09 04:08:18.924049] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:37.069 [2024-12-09 04:08:18.924081] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:37.069 [2024-12-09 04:08:18.924124] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:37.069 
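Once nvmf_subsystem_add_host allows nqn.2021-12.io.spdk:test on cnode0, the discovery controller reports the new subsystem, the host fetches the discovery log page, attaches to 10.0.0.3:4420, and bdev_nvme_get_controllers starts listing "nvme0". The waitforcondition calls in the trace amount to a bounded poll over the host RPC socket; a rough shell equivalent (assuming rpc.py and jq are available, as above):

    # Poll (up to ~10 s) until the discovery service has attached controller "nvme0"
    for _ in $(seq 1 10); do
        names=$(rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs)
        [ "$names" = "nvme0" ] && break
        sleep 1
    done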
[2024-12-09 04:08:18.930127] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:18:37.069 [2024-12-09 04:08:18.984673] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:18:37.069 [2024-12-09 04:08:18.985845] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x12dfda0:1 started. 00:18:37.069 [2024-12-09 04:08:18.987838] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:37.069 [2024-12-09 04:08:18.987860] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:37.069 [2024-12-09 04:08:18.992526] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x12dfda0 was disconnected and freed. delete nvme_qpair. 00:18:37.637 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:37.637 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:37.637 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:37.637 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:37.637 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:37.637 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.637 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.637 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:37.637 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:37.637 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.637 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.638 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:37.638 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:37.638 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:37.638 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:37.638 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:37.638 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:37.638 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:37.638 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:37.638 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.638 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.638 04:08:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:37.638 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:37.638 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.900 [2024-12-09 04:08:19.746985] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x12ee190:1 started. 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:37.900 [2024-12-09 04:08:19.753122] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x12ee190 was disconnected and freed. delete nvme_qpair. 
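Adding null1 as a second namespace on cnode0 is expected to surface a second namespace bdev on the host and one more asynchronous notification. The trace checks both through the host RPC socket; roughly (same rpc.py/jq assumption as above):

    # Bdev list should now contain both namespaces
    rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs   # expect: nvme0n1 nvme0n2
    # Exactly one new notification since notify_id 1
    rpc.py -s /tmp/host.sock notify_get_notifications -i 1 | jq '. | length'    # expect: 1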
00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:37.900 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:37.901 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:37.901 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.901 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:37.901 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.901 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.160 [2024-12-09 04:08:19.864399] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:38.160 [2024-12-09 04:08:19.865403] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:38.160 [2024-12-09 04:08:19.865577] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:38.160 [2024-12-09 04:08:19.871410] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:38.160 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:38.161 [2024-12-09 04:08:19.929915] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:18:38.161 [2024-12-09 04:08:19.929963] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:38.161 [2024-12-09 04:08:19.929975] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:38.161 [2024-12-09 04:08:19.929981] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers -n nvme0 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:38.161 04:08:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.161 [2024-12-09 04:08:20.088823] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:38.161 [2024-12-09 04:08:20.089019] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:38.161 [2024-12-09 04:08:20.094836] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:18:38.161 [2024-12-09 04:08:20.094861] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:38.161 [2024-12-09 04:08:20.095004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:38.161 [2024-12-09 04:08:20.095046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.161 [2024-12-09 04:08:20.095084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:38.161 [2024-12-09 04:08:20.095094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.161 [2024-12-09 04:08:20.095104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:38.161 [2024-12-09 04:08:20.095113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.161 [2024-12-09 04:08:20.095124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:38.161 [2024-12-09 04:08:20.095133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.161 [2024-12-09 04:08:20.095142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbfb0 is same with the state(6) to be set 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.161 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.162 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:38.162 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:38.422 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.682 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:18:38.682 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.682 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:38.682 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:38.682 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.682 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 
-- # (( max-- )) 00:18:38.682 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:38.682 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:38.682 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:38.682 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:38.682 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:38.682 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.682 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.683 04:08:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.620 [2024-12-09 04:08:21.504707] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:39.620 [2024-12-09 04:08:21.504919] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:39.620 [2024-12-09 04:08:21.504983] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:39.620 [2024-12-09 04:08:21.510742] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:18:39.880 [2024-12-09 04:08:21.569246] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:18:39.880 [2024-12-09 04:08:21.570432] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x12e2c00:1 started. 00:18:39.880 [2024-12-09 04:08:21.572856] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:39.880 [2024-12-09 04:08:21.572895] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:39.880 [2024-12-09 04:08:21.574270] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x12e2c00 was disconnected and freed. delete nvme_qpair. 
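With discovery stopped and restarted (-w waits for the initial attach to complete), host/discovery.sh@143 deliberately starts the same discovery service a second time under the name "nvme". bdev_nvme_start_discovery is expected to reject the duplicate with -17 "File exists", and the NOT wrapper asserts the non-zero exit, as the request/response dump that follows shows. In isolation this check amounts to (rpc.py assumed on PATH):

    # A second start under an existing discovery name must fail with -17 (File exists)
    if rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
            -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
        echo "unexpected: duplicate discovery start succeeded" >&2
        exit 1
    fi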
00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.880 request: 00:18:39.880 { 00:18:39.880 "name": "nvme", 00:18:39.880 "trtype": "tcp", 00:18:39.880 "traddr": "10.0.0.3", 00:18:39.880 "adrfam": "ipv4", 00:18:39.880 "trsvcid": "8009", 00:18:39.880 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:39.880 "wait_for_attach": true, 00:18:39.880 "method": "bdev_nvme_start_discovery", 00:18:39.880 "req_id": 1 00:18:39.880 } 00:18:39.880 Got JSON-RPC error response 00:18:39.880 response: 00:18:39.880 { 00:18:39.880 "code": -17, 00:18:39.880 "message": "File exists" 00:18:39.880 } 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.880 request: 00:18:39.880 { 00:18:39.880 "name": "nvme_second", 00:18:39.880 "trtype": "tcp", 00:18:39.880 "traddr": "10.0.0.3", 00:18:39.880 "adrfam": "ipv4", 00:18:39.880 "trsvcid": "8009", 00:18:39.880 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:39.880 "wait_for_attach": true, 00:18:39.880 "method": "bdev_nvme_start_discovery", 00:18:39.880 "req_id": 1 00:18:39.880 } 00:18:39.880 Got JSON-RPC error response 00:18:39.880 response: 00:18:39.880 { 00:18:39.880 "code": -17, 00:18:39.880 "message": "File exists" 00:18:39.880 } 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
get_discovery_ctrlrs 00:18:39.880 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:39.881 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:39.881 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.881 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:39.881 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:39.881 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.881 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.881 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:39.881 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:39.881 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:39.881 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.881 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:39.881 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.881 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:39.881 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:39.881 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.140 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:40.140 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:40.140 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:40.140 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:40.140 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:40.140 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.140 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:40.140 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.140 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:40.140 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.140 04:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:41.076 [2024-12-09 04:08:22.849203] uring.c: 664:uring_sock_create: *ERROR*: connect() 
failed, errno = 111 00:18:41.076 [2024-12-09 04:08:22.849466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dfbb0 with addr=10.0.0.3, port=8010 00:18:41.076 [2024-12-09 04:08:22.849504] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:41.076 [2024-12-09 04:08:22.849516] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:41.077 [2024-12-09 04:08:22.849526] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:42.012 [2024-12-09 04:08:23.849192] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:42.012 [2024-12-09 04:08:23.849275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ef5d0 with addr=10.0.0.3, port=8010 00:18:42.012 [2024-12-09 04:08:23.849303] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:42.012 [2024-12-09 04:08:23.849314] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:42.012 [2024-12-09 04:08:23.849322] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:42.947 [2024-12-09 04:08:24.849046] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:18:42.947 request: 00:18:42.947 { 00:18:42.947 "name": "nvme_second", 00:18:42.947 "trtype": "tcp", 00:18:42.947 "traddr": "10.0.0.3", 00:18:42.947 "adrfam": "ipv4", 00:18:42.947 "trsvcid": "8010", 00:18:42.947 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:42.947 "wait_for_attach": false, 00:18:42.947 "attach_timeout_ms": 3000, 00:18:42.947 "method": "bdev_nvme_start_discovery", 00:18:42.947 "req_id": 1 00:18:42.947 } 00:18:42.947 Got JSON-RPC error response 00:18:42.947 response: 00:18:42.947 { 00:18:42.947 "code": -110, 00:18:42.947 "message": "Connection timed out" 00:18:42.947 } 00:18:42.947 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:42.947 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:42.947 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:42.947 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:42.947 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:42.947 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:42.947 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:42.947 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.947 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:42.947 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:42.947 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:42.947 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:42.947 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.205 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:43.205 04:08:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:43.205 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76463 00:18:43.205 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:43.205 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:43.205 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:18:43.205 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.205 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:18:43.205 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.205 04:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.205 rmmod nvme_tcp 00:18:43.205 rmmod nvme_fabrics 00:18:43.205 rmmod nvme_keyring 00:18:43.205 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:43.205 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:18:43.205 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:18:43.205 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 76425 ']' 00:18:43.205 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 76425 00:18:43.205 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 76425 ']' 00:18:43.205 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 76425 00:18:43.205 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:18:43.205 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.205 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76425 00:18:43.205 killing process with pid 76425 00:18:43.206 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:43.206 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:43.206 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76425' 00:18:43.206 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 76425 00:18:43.206 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 76425 00:18:43.464 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:43.464 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:43.464 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:43.464 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:18:43.464 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:18:43.464 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:43.464 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:18:43.464 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:43.464 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:43.464 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:43.464 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:43.464 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:18:43.722 00:18:43.722 real 0m9.919s 00:18:43.722 user 0m18.319s 00:18:43.722 sys 0m2.080s 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.722 ************************************ 00:18:43.722 END TEST nvmf_host_discovery 00:18:43.722 ************************************ 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.722 ************************************ 00:18:43.722 START TEST nvmf_host_multipath_status 00:18:43.722 ************************************ 00:18:43.722 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:43.981 * Looking for test storage... 00:18:43.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:43.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.981 --rc genhtml_branch_coverage=1 00:18:43.981 --rc genhtml_function_coverage=1 00:18:43.981 --rc genhtml_legend=1 00:18:43.981 --rc geninfo_all_blocks=1 00:18:43.981 --rc geninfo_unexecuted_blocks=1 00:18:43.981 00:18:43.981 ' 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:43.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.981 --rc genhtml_branch_coverage=1 00:18:43.981 --rc genhtml_function_coverage=1 00:18:43.981 --rc genhtml_legend=1 00:18:43.981 --rc geninfo_all_blocks=1 00:18:43.981 --rc geninfo_unexecuted_blocks=1 00:18:43.981 00:18:43.981 ' 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:43.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.981 --rc genhtml_branch_coverage=1 00:18:43.981 --rc genhtml_function_coverage=1 00:18:43.981 --rc genhtml_legend=1 00:18:43.981 --rc geninfo_all_blocks=1 00:18:43.981 --rc geninfo_unexecuted_blocks=1 00:18:43.981 00:18:43.981 ' 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:43.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.981 --rc genhtml_branch_coverage=1 00:18:43.981 --rc genhtml_function_coverage=1 00:18:43.981 --rc genhtml_legend=1 00:18:43.981 --rc geninfo_all_blocks=1 00:18:43.981 --rc geninfo_unexecuted_blocks=1 00:18:43.981 00:18:43.981 ' 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:43.981 04:08:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.981 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:43.982 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:43.982 Cannot find device "nvmf_init_br" 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:43.982 Cannot find device "nvmf_init_br2" 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:43.982 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:44.241 Cannot find device "nvmf_tgt_br" 00:18:44.241 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:18:44.241 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:44.241 Cannot find device "nvmf_tgt_br2" 00:18:44.241 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:18:44.241 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:44.241 Cannot find device "nvmf_init_br" 00:18:44.241 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:18:44.241 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:44.241 Cannot find device "nvmf_init_br2" 00:18:44.241 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:18:44.241 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:44.241 Cannot find device "nvmf_tgt_br" 00:18:44.241 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:18:44.241 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:44.241 Cannot find device "nvmf_tgt_br2" 00:18:44.241 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:18:44.241 04:08:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:44.241 Cannot find device "nvmf_br" 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:18:44.241 Cannot find device "nvmf_init_if" 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:44.241 Cannot find device "nvmf_init_if2" 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:44.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:44.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:44.241 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:44.500 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:44.500 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:18:44.500 00:18:44.500 --- 10.0.0.3 ping statistics --- 00:18:44.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.500 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:44.500 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:44.500 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:18:44.500 00:18:44.500 --- 10.0.0.4 ping statistics --- 00:18:44.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.500 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:44.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:44.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:18:44.500 00:18:44.500 --- 10.0.0.1 ping statistics --- 00:18:44.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.500 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:44.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:44.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:18:44.500 00:18:44.500 --- 10.0.0.2 ping statistics --- 00:18:44.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.500 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76953 00:18:44.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76953 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76953 ']' 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.500 04:08:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:44.500 [2024-12-09 04:08:26.395116] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:18:44.500 [2024-12-09 04:08:26.395254] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.759 [2024-12-09 04:08:26.544814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:44.759 [2024-12-09 04:08:26.627649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.759 [2024-12-09 04:08:26.627936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.759 [2024-12-09 04:08:26.628013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.759 [2024-12-09 04:08:26.628095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.759 [2024-12-09 04:08:26.628159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.759 [2024-12-09 04:08:26.629767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.759 [2024-12-09 04:08:26.629773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.017 [2024-12-09 04:08:26.711534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:45.582 04:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.582 04:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:45.582 04:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:45.582 04:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.582 04:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:45.582 04:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.582 04:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76953 00:18:45.582 04:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:45.841 [2024-12-09 04:08:27.756885] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.841 04:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:46.099 Malloc0 00:18:46.358 04:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:46.616 04:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:46.882 04:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:47.154 [2024-12-09 04:08:28.912484] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:47.154 04:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:47.412 [2024-12-09 04:08:29.156670] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:47.412 04:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=77014 00:18:47.412 04:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:47.412 04:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:47.412 04:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 77014 /var/tmp/bdevperf.sock 00:18:47.412 04:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 77014 ']' 00:18:47.412 04:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.412 04:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.412 04:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:47.412 04:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.412 04:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:48.345 04:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.346 04:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:48.346 04:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:48.604 04:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:49.192 Nvme0n1 00:18:49.192 04:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:49.450 Nvme0n1 00:18:49.450 04:08:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:49.450 04:08:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:51.349 04:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:51.349 04:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:51.607 04:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:51.865 04:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:53.245 04:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:53.245 04:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:53.245 04:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.245 04:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:53.245 04:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:53.245 04:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:53.245 04:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.245 04:08:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:53.503 04:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:53.503 04:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:53.503 04:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.503 04:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:54.068 04:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.068 04:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:54.068 04:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.068 04:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:54.326 04:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.326 04:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:54.326 04:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:54.326 04:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.584 04:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.584 04:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:54.584 04:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.584 04:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:54.842 04:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.842 04:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:54.842 04:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:55.101 04:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:55.360 04:08:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:56.333 04:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:56.333 04:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:56.333 04:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.333 04:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:56.901 04:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:56.901 04:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:56.901 04:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.901 04:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:57.160 04:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.160 04:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:57.160 04:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:57.160 04:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.419 04:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.419 04:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:57.419 04:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:57.419 04:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.678 04:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.678 04:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:57.678 04:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.678 04:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:57.937 04:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.937 04:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:57.937 04:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.937 04:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:58.196 04:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:58.196 04:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:58.196 04:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:58.765 04:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:18:58.765 04:08:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:00.140 04:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:00.140 04:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:00.140 04:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.140 04:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:00.140 04:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:00.140 04:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:00.140 04:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.140 04:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:00.399 04:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:00.399 04:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:00.399 04:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.399 04:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:00.658 04:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:00.658 04:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:19:00.658 04:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:00.658 04:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.917 04:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:00.917 04:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:00.917 04:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.917 04:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:01.175 04:08:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.175 04:08:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:01.175 04:08:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:01.175 04:08:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.743 04:08:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.743 04:08:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:01.743 04:08:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:02.002 04:08:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:02.260 04:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:03.194 04:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:03.194 04:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:03.194 04:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.194 04:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:03.450 04:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:03.450 04:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:03.451 04:08:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.451 04:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:03.708 04:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:03.708 04:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:03.708 04:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.708 04:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:03.965 04:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:03.965 04:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:03.965 04:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.965 04:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:04.223 04:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.223 04:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:04.223 04:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.223 04:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:04.481 04:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.481 04:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:04.481 04:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:04.481 04:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.738 04:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:04.738 04:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:04.738 04:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:04.995 04:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:05.560 04:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:06.491 04:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:06.491 04:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:06.492 04:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.492 04:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:06.750 04:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:06.750 04:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:06.750 04:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:06.750 04:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.009 04:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:07.009 04:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:07.009 04:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:07.009 04:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.267 04:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.267 04:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:07.267 04:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.267 04:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:07.525 04:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.525 04:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:07.525 04:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.525 04:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:19:07.785 04:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:07.785 04:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:07.785 04:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:07.785 04:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.044 04:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:08.044 04:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:08.044 04:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:08.303 04:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:08.561 04:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:09.497 04:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:09.497 04:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:09.497 04:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.497 04:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:10.065 04:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:10.065 04:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:10.065 04:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.065 04:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:10.324 04:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.324 04:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:10.324 04:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.324 04:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:19:10.582 04:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.582 04:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:10.582 04:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.582 04:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:10.841 04:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.841 04:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:10.841 04:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.841 04:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:11.099 04:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:11.099 04:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:11.099 04:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.099 04:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:11.358 04:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.358 04:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:11.924 04:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:11.924 04:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:12.181 04:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:12.438 04:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:13.370 04:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:13.370 04:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:13.370 04:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
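Up to this point the Nvme0n1 bdev has been using the default active_passive policy, so at most one path reports current==true at a time. The bdev_nvme_set_multipath_policy call above switches it to active_active, which is why the checks that follow expect current==true on both ports whenever both paths are in the same ANA state. Roughly, assuming the same socket and bdev name as above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
# Switch Nvme0n1 from the default active_passive policy to active_active,
# then list the "current" flag of every I/O path to see both become active.
"$rpc" -s "$sock" bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
"$rpc" -s "$sock" bdev_nvme_get_io_paths | jq -r '.poll_groups[].io_paths[].current'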
00:19:13.370 04:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:13.934 04:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:13.934 04:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:13.934 04:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:13.934 04:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.192 04:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.192 04:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:14.192 04:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.192 04:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:14.452 04:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.452 04:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:14.452 04:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:14.452 04:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.710 04:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.710 04:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:14.710 04:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.710 04:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:14.969 04:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.969 04:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:14.969 04:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:14.969 04:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.227 04:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.227 
04:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:15.227 04:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:15.792 04:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:15.792 04:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:17.166 04:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:17.166 04:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:17.166 04:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.166 04:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:17.166 04:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:17.166 04:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:17.166 04:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:17.166 04:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.731 04:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.731 04:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:17.731 04:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.731 04:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:17.989 04:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.989 04:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:17.989 04:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.989 04:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:18.247 04:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.248 04:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:18.248 04:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.248 04:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:18.506 04:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.506 04:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:18.506 04:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:18.506 04:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.765 04:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.765 04:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:18.765 04:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:19.023 04:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:19.282 04:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:20.681 04:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:20.681 04:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:20.681 04:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.681 04:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:20.681 04:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.681 04:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:20.681 04:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.681 04:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:20.939 04:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.939 04:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:19:20.939 04:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.939 04:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:21.198 04:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.198 04:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:21.198 04:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:21.198 04:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:21.457 04:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.457 04:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:21.457 04:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:21.457 04:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:22.023 04:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:22.023 04:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:22.023 04:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.023 04:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:22.281 04:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:22.281 04:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:22.281 04:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:22.539 04:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:22.539 04:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:23.913 04:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:23.913 04:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:23.913 04:09:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:23.913 04:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.913 04:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:23.913 04:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:23.913 04:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.913 04:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:24.170 04:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:24.170 04:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:24.170 04:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.170 04:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:24.430 04:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:24.430 04:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:24.430 04:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.430 04:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:24.735 04:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:24.735 04:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:24.735 04:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.735 04:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:24.993 04:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:24.993 04:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:24.993 04:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.993 04:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:19:25.251 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:25.251 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 77014 00:19:25.251 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 77014 ']' 00:19:25.251 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 77014 00:19:25.251 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:19:25.251 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.251 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77014 00:19:25.251 killing process with pid 77014 00:19:25.251 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:25.251 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:25.251 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77014' 00:19:25.251 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 77014 00:19:25.251 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 77014 00:19:25.251 { 00:19:25.251 "results": [ 00:19:25.251 { 00:19:25.251 "job": "Nvme0n1", 00:19:25.251 "core_mask": "0x4", 00:19:25.251 "workload": "verify", 00:19:25.251 "status": "terminated", 00:19:25.251 "verify_range": { 00:19:25.251 "start": 0, 00:19:25.251 "length": 16384 00:19:25.251 }, 00:19:25.251 "queue_depth": 128, 00:19:25.251 "io_size": 4096, 00:19:25.251 "runtime": 35.798433, 00:19:25.251 "iops": 8718.92912184173, 00:19:25.251 "mibps": 34.058316882194255, 00:19:25.251 "io_failed": 0, 00:19:25.251 "io_timeout": 0, 00:19:25.251 "avg_latency_us": 14649.203898963231, 00:19:25.251 "min_latency_us": 577.1636363636363, 00:19:25.251 "max_latency_us": 4026531.84 00:19:25.251 } 00:19:25.251 ], 00:19:25.251 "core_count": 1 00:19:25.251 } 00:19:25.513 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 77014 00:19:25.513 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:25.513 [2024-12-09 04:08:29.231379] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:19:25.513 [2024-12-09 04:08:29.231506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77014 ] 00:19:25.513 [2024-12-09 04:08:29.381490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.513 [2024-12-09 04:08:29.462645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.513 [2024-12-09 04:08:29.540490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:25.513 Running I/O for 90 seconds... 
00:19:25.513 6692.00 IOPS, 26.14 MiB/s [2024-12-09T04:09:07.463Z] 6858.50 IOPS, 26.79 MiB/s [2024-12-09T04:09:07.463Z] 6833.67 IOPS, 26.69 MiB/s [2024-12-09T04:09:07.463Z] 6853.25 IOPS, 26.77 MiB/s [2024-12-09T04:09:07.463Z] 6791.20 IOPS, 26.53 MiB/s [2024-12-09T04:09:07.463Z] 6936.50 IOPS, 27.10 MiB/s [2024-12-09T04:09:07.463Z] 7243.86 IOPS, 28.30 MiB/s [2024-12-09T04:09:07.463Z] 7448.25 IOPS, 29.09 MiB/s [2024-12-09T04:09:07.463Z] 7638.44 IOPS, 29.84 MiB/s [2024-12-09T04:09:07.463Z] 7888.10 IOPS, 30.81 MiB/s [2024-12-09T04:09:07.463Z] 8123.00 IOPS, 31.73 MiB/s [2024-12-09T04:09:07.463Z] 8312.67 IOPS, 32.47 MiB/s [2024-12-09T04:09:07.463Z] 8482.92 IOPS, 33.14 MiB/s [2024-12-09T04:09:07.463Z] 8625.64 IOPS, 33.69 MiB/s [2024-12-09T04:09:07.463Z] 8747.67 IOPS, 34.17 MiB/s [2024-12-09T04:09:07.463Z] [2024-12-09 04:08:46.902471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.513 [2024-12-09 04:08:46.902547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.902627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.513 [2024-12-09 04:08:46.902648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.902669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.513 [2024-12-09 04:08:46.902685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.902705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.513 [2024-12-09 04:08:46.902719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.902739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:48856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.513 [2024-12-09 04:08:46.902753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.902773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.513 [2024-12-09 04:08:46.902788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.902807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.513 [2024-12-09 04:08:46.902822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.902841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.513 [2024-12-09 04:08:46.902856] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.902875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.513 [2024-12-09 04:08:46.902890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.902942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.513 [2024-12-09 04:08:46.902957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.902976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.513 [2024-12-09 04:08:46.902991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.903010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.513 [2024-12-09 04:08:46.903023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.903043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.513 [2024-12-09 04:08:46.903057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.903076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:48448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.513 [2024-12-09 04:08:46.903090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.903109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.513 [2024-12-09 04:08:46.903123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.903142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.513 [2024-12-09 04:08:46.903157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.903178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.513 [2024-12-09 04:08:46.903208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.903230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
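The qpair traces around this point come from the window near 04:08:46, when the listeners were being flipped to inaccessible while bdevperf's verify workload still had I/O outstanding; each completion carries the path-related NVMe status ASYMMETRIC ACCESS INACCESSIBLE (the "03/02" in every line, i.e. status code type 0x3, status code 0x2). One way to gauge how many I/Os hit that status in the saved log, assuming the same try.txt path that was cat'd above:

# Count completions that came back with the ANA-inaccessible path status.
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
    /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt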
00:19:25.513 [2024-12-09 04:08:46.903245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.903264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.513 [2024-12-09 04:08:46.903278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.903297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.513 [2024-12-09 04:08:46.903312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.903331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.513 [2024-12-09 04:08:46.903345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:25.513 [2024-12-09 04:08:46.903375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.903390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.903410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.903424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.903444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.903459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.903483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.903499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.903519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.903533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.903553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.903567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.903586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 
lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.903600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.903620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.903634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.903654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.903668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.903687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.903702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.903721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.903735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.903755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-12-09 04:08:46.903772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.903792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-12-09 04:08:46.903815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.903835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-12-09 04:08:46.903850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.903869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-12-09 04:08:46.903883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.903903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-12-09 04:08:46.903917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.903937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-12-09 04:08:46.903951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.903970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-12-09 04:08:46.903985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.904004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-12-09 04:08:46.904019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.904041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.904056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.904076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.904090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.904110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.904125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.904144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.904158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.904190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.904222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.904242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.904266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.904288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.904304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:19:25.514 [2024-12-09 04:08:46.904324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.904339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.904360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.904376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.904396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.904411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.904431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.904446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.904466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.904481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.904500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.904515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.904535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.904564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.904584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.904598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.904618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.904632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.904651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.904665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:25.514 [2024-12-09 04:08:46.904685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.514 [2024-12-09 04:08:46.904700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.904727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.515 [2024-12-09 04:08:46.904743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.904762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.515 [2024-12-09 04:08:46.904777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.904796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.515 [2024-12-09 04:08:46.904821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.904840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.515 [2024-12-09 04:08:46.904854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.904874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.904889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.904908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.904923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.904942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.904957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.904976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.904991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.905024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.905058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.905092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.905126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.905193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.905239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.905275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.905310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.905357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.905394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:25.515 [2024-12-09 04:08:46.905428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.905463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.515 [2024-12-09 04:08:46.905499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.515 [2024-12-09 04:08:46.905533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.515 [2024-12-09 04:08:46.905574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.515 [2024-12-09 04:08:46.905609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.515 [2024-12-09 04:08:46.905666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.515 [2024-12-09 04:08:46.905700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.515 [2024-12-09 04:08:46.905734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.515 [2024-12-09 04:08:46.905768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.515 [2024-12-09 04:08:46.905801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.515 [2024-12-09 04:08:46.905834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.905898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.905935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:48712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.905977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.905999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.906015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.906036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.906052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.906073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.906089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.906109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.906133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.906155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.906185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:25.515 [2024-12-09 04:08:46.906249] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.515 [2024-12-09 04:08:46.906267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.906286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.516 [2024-12-09 04:08:46.906300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.906320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.516 [2024-12-09 04:08:46.906334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.906353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.516 [2024-12-09 04:08:46.906368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.906387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.516 [2024-12-09 04:08:46.906401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.906420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.516 [2024-12-09 04:08:46.906435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.906454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.516 [2024-12-09 04:08:46.906468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.907144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.516 [2024-12-09 04:08:46.907183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.907232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.907248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.907274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.907306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 
dnr:0 00:19:25.516 [2024-12-09 04:08:46.907332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.907358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.907403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.907419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.907446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.907462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.907490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.907506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.907533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.907549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.907605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.907625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.907652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.907668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.907694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.907710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.907743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.907758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.907799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.907814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.907839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.907854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.907879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.907894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.907920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.907935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.907981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.908000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.908026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.908041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.908067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.908081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.908107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.908123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.908149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.908163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.908189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.908205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.908231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.908257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.908302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.908318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:08:46.908345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:08:46.908362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:25.516 8570.44 IOPS, 33.48 MiB/s [2024-12-09T04:09:07.466Z] 8066.29 IOPS, 31.51 MiB/s [2024-12-09T04:09:07.466Z] 7618.17 IOPS, 29.76 MiB/s [2024-12-09T04:09:07.466Z] 7217.21 IOPS, 28.19 MiB/s [2024-12-09T04:09:07.466Z] 7085.20 IOPS, 27.68 MiB/s [2024-12-09T04:09:07.466Z] 7189.29 IOPS, 28.08 MiB/s [2024-12-09T04:09:07.466Z] 7272.68 IOPS, 28.41 MiB/s [2024-12-09T04:09:07.466Z] 7400.30 IOPS, 28.91 MiB/s [2024-12-09T04:09:07.466Z] 7589.12 IOPS, 29.65 MiB/s [2024-12-09T04:09:07.466Z] 7760.24 IOPS, 30.31 MiB/s [2024-12-09T04:09:07.466Z] 7952.31 IOPS, 31.06 MiB/s [2024-12-09T04:09:07.466Z] 8010.96 IOPS, 31.29 MiB/s [2024-12-09T04:09:07.466Z] 8059.14 IOPS, 31.48 MiB/s [2024-12-09T04:09:07.466Z] 8102.34 IOPS, 31.65 MiB/s [2024-12-09T04:09:07.466Z] 8170.37 IOPS, 31.92 MiB/s [2024-12-09T04:09:07.466Z] 8315.74 IOPS, 32.48 MiB/s [2024-12-09T04:09:07.466Z] 8446.12 IOPS, 32.99 MiB/s [2024-12-09T04:09:07.466Z] 8589.27 IOPS, 33.55 MiB/s [2024-12-09T04:09:07.466Z] [2024-12-09 04:09:04.464072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.516 [2024-12-09 04:09:04.464155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:09:04.464273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:09:04.464335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:25.516 [2024-12-09 04:09:04.464362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.516 [2024-12-09 04:09:04.464379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.464401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.517 [2024-12-09 04:09:04.464416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.464437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.517 [2024-12-09 04:09:04.464451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.464472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.517 [2024-12-09 04:09:04.464487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.464508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.517 [2024-12-09 04:09:04.464523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.464544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.517 [2024-12-09 04:09:04.464573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.464594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.517 [2024-12-09 04:09:04.464608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.464628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.517 [2024-12-09 04:09:04.464657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.464677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.517 [2024-12-09 04:09:04.464691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.464710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.517 [2024-12-09 04:09:04.464724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.464743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.517 [2024-12-09 04:09:04.464757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.464776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.517 [2024-12-09 04:09:04.464801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.464822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.517 [2024-12-09 04:09:04.464837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.464856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.517 [2024-12-09 04:09:04.464871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.464890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.517 [2024-12-09 04:09:04.464904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.464926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.517 [2024-12-09 04:09:04.464941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.464961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.517 [2024-12-09 04:09:04.464975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.464995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:51560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.517 [2024-12-09 04:09:04.465009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.465029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.517 [2024-12-09 04:09:04.465044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.465063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.517 [2024-12-09 04:09:04.465077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.465097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.517 [2024-12-09 04:09:04.465111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.465131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.517 [2024-12-09 04:09:04.465146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.465165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:25.517 [2024-12-09 04:09:04.465196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.465233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.517 [2024-12-09 04:09:04.465264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.465296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.517 [2024-12-09 04:09:04.465313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.465334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.517 [2024-12-09 04:09:04.465349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.465370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.517 [2024-12-09 04:09:04.465385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.465406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.517 [2024-12-09 04:09:04.465422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.465442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.517 [2024-12-09 04:09:04.465458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.465478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.517 [2024-12-09 04:09:04.465494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.465514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.517 [2024-12-09 04:09:04.465529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.465551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.517 [2024-12-09 04:09:04.465582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:25.517 [2024-12-09 04:09:04.465635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:51784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.465654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.465675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:51816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.465690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.465709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:51848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.465724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.465744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.465758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.465787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.518 [2024-12-09 04:09:04.465803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.465825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.518 [2024-12-09 04:09:04.465840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.465860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.465874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.465922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.518 [2024-12-09 04:09:04.465940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.465960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.465976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.465997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.466013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.466043] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.466058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.466079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.466094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.466114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.466130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.466151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.466166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.466198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.518 [2024-12-09 04:09:04.466217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.466239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.518 [2024-12-09 04:09:04.466255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.466276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.518 [2024-12-09 04:09:04.466300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.466322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.518 [2024-12-09 04:09:04.466337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.466358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.518 [2024-12-09 04:09:04.466374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.466395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.518 [2024-12-09 04:09:04.466410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:19:25.518 [2024-12-09 04:09:04.466430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.518 [2024-12-09 04:09:04.466446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.466466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:51664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.466482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.466516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.466546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.466567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.466582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.466602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.466616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.468314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.518 [2024-12-09 04:09:04.468345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.468373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.518 [2024-12-09 04:09:04.468390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.468411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.518 [2024-12-09 04:09:04.468426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.468447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.518 [2024-12-09 04:09:04.468475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.468499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.468531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.468551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.468581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.468602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.468617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.468637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.468652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.468671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.468686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.468705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.468720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.468740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:52432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.518 [2024-12-09 04:09:04.468754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.468774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:52448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.518 [2024-12-09 04:09:04.468788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.468808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.518 [2024-12-09 04:09:04.468822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:25.518 [2024-12-09 04:09:04.468842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.518 [2024-12-09 04:09:04.468857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:25.518 8654.94 IOPS, 33.81 MiB/s [2024-12-09T04:09:07.468Z] 8695.66 IOPS, 33.97 MiB/s [2024-12-09T04:09:07.468Z] Received shutdown signal, test time was about 35.799246 seconds 00:19:25.518 00:19:25.519 Latency(us) 00:19:25.519 
[2024-12-09T04:09:07.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.519 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:25.519 Verification LBA range: start 0x0 length 0x4000 00:19:25.519 Nvme0n1 : 35.80 8718.93 34.06 0.00 0.00 14649.20 577.16 4026531.84 00:19:25.519 [2024-12-09T04:09:07.469Z] =================================================================================================================== 00:19:25.519 [2024-12-09T04:09:07.469Z] Total : 8718.93 34.06 0.00 0.00 14649.20 577.16 4026531.84 00:19:25.519 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:25.777 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:19:25.777 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:25.777 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:19:25.777 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:25.777 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:19:25.777 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:25.777 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:19:25.777 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:25.777 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:25.777 rmmod nvme_tcp 00:19:25.777 rmmod nvme_fabrics 00:19:25.777 rmmod nvme_keyring 00:19:26.036 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:26.036 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:19:26.036 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:19:26.036 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76953 ']' 00:19:26.036 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76953 00:19:26.036 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76953 ']' 00:19:26.036 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76953 00:19:26.036 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:19:26.036 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.036 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76953 00:19:26.036 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.036 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.036 killing process with pid 76953 00:19:26.036 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
76953' 00:19:26.036 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76953 00:19:26.036 04:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76953 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:26.294 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:26.552 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:26.552 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:26.552 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:26.552 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.552 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.552 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.552 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@300 -- # return 0 00:19:26.552 00:19:26.552 real 0m42.676s 00:19:26.552 user 2m17.748s 00:19:26.552 sys 0m12.419s 00:19:26.552 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.552 04:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:26.552 ************************************ 00:19:26.552 END TEST nvmf_host_multipath_status 00:19:26.552 ************************************ 00:19:26.552 04:09:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:26.552 04:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:26.552 04:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.552 04:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.552 ************************************ 00:19:26.552 START TEST nvmf_discovery_remove_ifc 00:19:26.552 ************************************ 00:19:26.552 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:26.552 * Looking for test storage... 00:19:26.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:26.552 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:26.552 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:19:26.552 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:26.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.812 --rc genhtml_branch_coverage=1 00:19:26.812 --rc genhtml_function_coverage=1 00:19:26.812 --rc genhtml_legend=1 00:19:26.812 --rc geninfo_all_blocks=1 00:19:26.812 --rc geninfo_unexecuted_blocks=1 00:19:26.812 00:19:26.812 ' 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:26.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.812 --rc genhtml_branch_coverage=1 00:19:26.812 --rc genhtml_function_coverage=1 00:19:26.812 --rc genhtml_legend=1 00:19:26.812 --rc geninfo_all_blocks=1 00:19:26.812 --rc geninfo_unexecuted_blocks=1 00:19:26.812 00:19:26.812 ' 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:26.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.812 --rc genhtml_branch_coverage=1 00:19:26.812 --rc genhtml_function_coverage=1 00:19:26.812 --rc genhtml_legend=1 00:19:26.812 --rc geninfo_all_blocks=1 00:19:26.812 --rc geninfo_unexecuted_blocks=1 00:19:26.812 00:19:26.812 ' 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:26.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.812 --rc genhtml_branch_coverage=1 00:19:26.812 --rc genhtml_function_coverage=1 00:19:26.812 --rc genhtml_legend=1 00:19:26.812 --rc geninfo_all_blocks=1 00:19:26.812 --rc geninfo_unexecuted_blocks=1 00:19:26.812 00:19:26.812 ' 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:26.812 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:26.813 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:26.813 04:09:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:26.813 Cannot find device "nvmf_init_br" 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:26.813 Cannot find device "nvmf_init_br2" 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:26.813 Cannot find device "nvmf_tgt_br" 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:26.813 Cannot find device "nvmf_tgt_br2" 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:26.813 Cannot find device "nvmf_init_br" 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:26.813 Cannot find device "nvmf_init_br2" 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:26.813 Cannot find device "nvmf_tgt_br" 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:26.813 Cannot find device "nvmf_tgt_br2" 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:26.813 Cannot find device "nvmf_br" 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:26.813 Cannot find device "nvmf_init_if" 00:19:26.813 04:09:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:26.813 Cannot find device "nvmf_init_if2" 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:26.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:26.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:19:26.813 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:27.073 04:09:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:27.073 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:27.074 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:27.074 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:27.074 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:27.074 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:27.074 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:27.074 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:27.074 04:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:27.074 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:27.074 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:27.074 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:19:27.074 00:19:27.074 --- 10.0.0.3 ping statistics --- 00:19:27.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.074 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:19:27.074 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:27.074 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:27.074 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:19:27.074 00:19:27.074 --- 10.0.0.4 ping statistics --- 00:19:27.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.074 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:27.074 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:27.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:27.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:19:27.074 00:19:27.074 --- 10.0.0.1 ping statistics --- 00:19:27.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.074 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:19:27.074 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:27.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:27.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:19:27.333 00:19:27.333 --- 10.0.0.2 ping statistics --- 00:19:27.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.333 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77876 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77876 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77876 ']' 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
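At this point the trace has launched the target application inside the nvmf_tgt_ns_spdk namespace and is waiting for its RPC socket to come up. A minimal sketch of the equivalent manual steps, assuming the default /var/tmp/spdk.sock RPC socket and a simple polling loop (the real logic lives in the nvmfappstart/waitforlisten helpers sourced by this test):

    # Start the NVMe-oF target inside the test namespace (mirrors the command above)
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Hypothetical wait loop: poll the RPC socket until the target answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done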
00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.333 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:27.333 [2024-12-09 04:09:09.118372] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:19:27.333 [2024-12-09 04:09:09.118471] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.333 [2024-12-09 04:09:09.264019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.592 [2024-12-09 04:09:09.317007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.592 [2024-12-09 04:09:09.317076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.592 [2024-12-09 04:09:09.317087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.592 [2024-12-09 04:09:09.317095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.592 [2024-12-09 04:09:09.317101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:27.592 [2024-12-09 04:09:09.317522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.592 [2024-12-09 04:09:09.391921] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:27.592 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.592 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:19:27.592 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:27.592 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:27.592 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:27.592 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.592 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:27.592 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.592 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:27.851 [2024-12-09 04:09:09.546897] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.851 [2024-12-09 04:09:09.555006] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:27.851 null0 00:19:27.851 [2024-12-09 04:09:09.586863] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:27.852 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.852 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77906 00:19:27.852 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:27.852 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77906 /tmp/host.sock 00:19:27.852 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77906 ']' 00:19:27.852 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:19:27.852 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.852 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:27.852 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:27.852 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.852 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:27.852 [2024-12-09 04:09:09.677294] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:19:27.852 [2024-12-09 04:09:09.677424] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77906 ] 00:19:28.111 [2024-12-09 04:09:09.837956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.111 [2024-12-09 04:09:09.919225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.111 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.111 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:19:28.111 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:28.111 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:28.111 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.111 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:28.111 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.111 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:28.111 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.111 04:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:28.371 [2024-12-09 04:09:10.070454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:28.371 04:09:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.371 04:09:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:28.371 04:09:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.371 04:09:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:29.307 [2024-12-09 04:09:11.150887] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:29.307 [2024-12-09 04:09:11.150965] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:29.307 [2024-12-09 04:09:11.151002] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:29.307 [2024-12-09 04:09:11.156959] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:29.307 [2024-12-09 04:09:11.211496] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:19:29.307 [2024-12-09 04:09:11.212913] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1813f00:1 started. 00:19:29.307 [2024-12-09 04:09:11.215261] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:29.307 [2024-12-09 04:09:11.215368] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:29.307 [2024-12-09 04:09:11.215402] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:29.307 [2024-12-09 04:09:11.215426] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:29.307 [2024-12-09 04:09:11.215453] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:29.307 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.307 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:29.307 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:29.307 [2024-12-09 04:09:11.219366] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1813f00 was disconnected and freed. delete nvme_qpair. 
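The bdev_get_bdevs / jq / sort / xargs pipeline that repeats below is the wait_for_bdev polling loop: the test lists bdev names over the host RPC socket once per second until the list matches the expected value (nvme0n1 while the path is up, an empty string once the interface has been removed). A rough reconstruction from the commands visible in this trace (not the script source):

    get_bdev_list() {
        # Names of all bdevs currently known to the host app on /tmp/host.sock
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # Poll once per second until the bdev list equals the expected value ('' means gone)
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }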
00:19:29.307 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:29.307 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:29.307 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.307 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:29.307 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:29.307 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:29.307 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.565 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:29.565 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:19:29.565 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:19:29.565 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:29.565 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:29.565 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:29.565 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.565 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:29.565 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:29.565 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:29.565 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:29.565 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.565 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:29.565 04:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:30.501 04:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:30.501 04:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:30.501 04:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.501 04:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:30.501 04:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:30.501 04:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:30.501 04:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:30.501 04:09:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.501 04:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:30.501 04:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:31.878 04:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:31.878 04:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:31.878 04:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:31.878 04:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.878 04:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:31.878 04:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:31.878 04:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:31.878 04:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.878 04:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:31.878 04:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:32.839 04:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:32.839 04:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:32.839 04:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.839 04:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:32.839 04:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:32.839 04:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:32.839 04:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:32.839 04:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.839 04:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:32.839 04:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:33.772 04:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:33.772 04:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:33.772 04:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:33.772 04:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:33.772 04:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.772 04:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.772 04:09:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:33.772 04:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.772 04:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:33.772 04:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:34.756 04:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:34.756 04:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.756 04:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.756 04:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:34.756 04:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:34.756 04:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:34.756 04:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:34.756 04:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.756 [2024-12-09 04:09:16.642060] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:34.756 [2024-12-09 04:09:16.642129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.756 [2024-12-09 04:09:16.642145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.756 [2024-12-09 04:09:16.642158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.756 [2024-12-09 04:09:16.642177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.756 [2024-12-09 04:09:16.642189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.756 [2024-12-09 04:09:16.642198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.756 [2024-12-09 04:09:16.642208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.756 [2024-12-09 04:09:16.642217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.756 [2024-12-09 04:09:16.642227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.756 [2024-12-09 04:09:16.642235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.756 [2024-12-09 04:09:16.642244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17effc0 is same with the state(6) to be set 00:19:34.756 [2024-12-09 04:09:16.652056] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17effc0 (9): Bad file descriptor 00:19:34.756 04:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:34.756 04:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:34.756 [2024-12-09 04:09:16.662074] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:34.756 [2024-12-09 04:09:16.662106] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:34.756 [2024-12-09 04:09:16.662113] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:34.756 [2024-12-09 04:09:16.662120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:34.756 [2024-12-09 04:09:16.662184] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:36.131 04:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:36.131 04:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:36.131 04:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.131 04:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:36.131 04:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.131 04:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:36.131 04:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:36.131 [2024-12-09 04:09:17.714235] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:19:36.131 [2024-12-09 04:09:17.714316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17effc0 with addr=10.0.0.3, port=4420 00:19:36.131 [2024-12-09 04:09:17.714341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17effc0 is same with the state(6) to be set 00:19:36.131 [2024-12-09 04:09:17.714383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17effc0 (9): Bad file descriptor 00:19:36.131 [2024-12-09 04:09:17.714997] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:19:36.131 [2024-12-09 04:09:17.715051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:36.131 [2024-12-09 04:09:17.715067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:36.131 [2024-12-09 04:09:17.715082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:36.131 [2024-12-09 04:09:17.715095] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:36.131 [2024-12-09 04:09:17.715105] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:19:36.131 [2024-12-09 04:09:17.715113] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:36.131 [2024-12-09 04:09:17.715127] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:36.131 [2024-12-09 04:09:17.715135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:36.131 04:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.131 04:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:36.131 04:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:37.078 [2024-12-09 04:09:18.715222] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:37.078 [2024-12-09 04:09:18.715273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:37.078 [2024-12-09 04:09:18.715291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:37.078 [2024-12-09 04:09:18.715302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:37.078 [2024-12-09 04:09:18.715311] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:19:37.078 [2024-12-09 04:09:18.715320] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:37.078 [2024-12-09 04:09:18.715326] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:37.078 [2024-12-09 04:09:18.715331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
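The repeated get_bdev_list blocks in the trace above are the test polling the host-side RPC socket once per second until the namespace bdev disappears after the target interface was taken down. A minimal sketch of that polling pattern, reconstructed from the xtrace output (the rpc_cmd/jq pipeline and the sleep 1 appear verbatim in the trace; the exact function bodies in host/discovery_remove_ifc.sh are an assumption, and rpc_cmd is the SPDK test helper wrapping rpc.py):

    # Sketch: list bdev names through the host-side RPC socket, as traced above.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Sketch: poll once per second until the bdev list equals the expected value
    # ('' while waiting for removal, a bdev name while waiting for re-attach).
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }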
00:19:37.078 [2024-12-09 04:09:18.715366] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:19:37.078 [2024-12-09 04:09:18.715402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.078 [2024-12-09 04:09:18.715418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.078 [2024-12-09 04:09:18.715431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.078 [2024-12-09 04:09:18.715440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.078 [2024-12-09 04:09:18.715449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.078 [2024-12-09 04:09:18.715457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.078 [2024-12-09 04:09:18.715466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.078 [2024-12-09 04:09:18.715473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.078 [2024-12-09 04:09:18.715483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.078 [2024-12-09 04:09:18.715491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.078 [2024-12-09 04:09:18.715499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:19:37.078 [2024-12-09 04:09:18.716169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177ba20 (9): Bad file descriptor 00:19:37.078 [2024-12-09 04:09:18.717184] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:37.078 [2024-12-09 04:09:18.717214] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:37.078 04:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:38.012 04:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:38.012 04:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:38.013 04:09:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:38.013 04:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.013 04:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:38.013 04:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:38.013 04:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:38.013 04:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.013 04:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:38.013 04:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:38.959 [2024-12-09 04:09:20.724463] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:38.959 [2024-12-09 04:09:20.724500] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:38.959 [2024-12-09 04:09:20.724536] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:38.959 [2024-12-09 04:09:20.730497] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:19:38.959 [2024-12-09 04:09:20.784862] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:19:38.959 [2024-12-09 04:09:20.786249] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x181c1d0:1 started. 00:19:38.959 [2024-12-09 04:09:20.788054] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:38.959 [2024-12-09 04:09:20.788135] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:38.959 [2024-12-09 04:09:20.788169] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:38.959 [2024-12-09 04:09:20.788221] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:19:38.959 [2024-12-09 04:09:20.788235] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:38.959 [2024-12-09 04:09:20.793012] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x181c1d0 was disconnected and freed. delete nvme_qpair. 
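Taken together, the @75-@86 trace lines above exercise a target-interface flap: drop the target IP inside the netns, wait for the host bdev to vanish, then restore the address and wait for discovery to attach a fresh namespace. A condensed sketch of that sequence using the commands as they appear in the trace (interface, netns and bdev names are the ones from this test run; wait_for_bdev is the polling helper sketched earlier):

    # Take the target interface down and wait until the host loses the bdev.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''          # poll until get_bdev_list returns nothing

    # Bring the interface back and wait for discovery to re-attach a namespace.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1     # the re-attached controller shows up as nvme1n1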
00:19:39.217 04:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:39.217 04:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:39.217 04:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:39.217 04:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.217 04:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:39.217 04:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:39.217 04:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:39.217 04:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.217 04:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:39.217 04:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:39.217 04:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77906 00:19:39.217 04:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77906 ']' 00:19:39.217 04:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77906 00:19:39.217 04:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:19:39.217 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.217 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77906 00:19:39.217 killing process with pid 77906 00:19:39.217 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:39.217 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:39.217 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77906' 00:19:39.217 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77906 00:19:39.217 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77906 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:39.476 rmmod nvme_tcp 00:19:39.476 rmmod nvme_fabrics 00:19:39.476 rmmod nvme_keyring 00:19:39.476 04:09:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77876 ']' 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77876 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77876 ']' 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77876 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77876 00:19:39.476 killing process with pid 77876 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77876' 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77876 00:19:39.476 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77876 00:19:39.735 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:39.735 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:39.735 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:39.735 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:19:39.735 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:19:39.735 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:39.735 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:19:39.735 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:39.735 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:39.735 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:39.994 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:39.994 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:39.994 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:39.994 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:39.994 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:39.995 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:39.995 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:39.995 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:39.995 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:39.995 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:39.995 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:39.995 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:39.995 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:39.995 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.995 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.995 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.995 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:19:39.995 00:19:39.995 real 0m13.514s 00:19:39.995 user 0m22.643s 00:19:39.995 sys 0m2.733s 00:19:39.995 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.995 ************************************ 00:19:39.995 04:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:39.995 END TEST nvmf_discovery_remove_ifc 00:19:39.995 ************************************ 00:19:40.254 04:09:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:40.254 04:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:40.254 04:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.254 04:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.254 ************************************ 00:19:40.254 START TEST nvmf_identify_kernel_target 00:19:40.254 ************************************ 00:19:40.254 04:09:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:40.254 * Looking for test storage... 
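The nvmftestfini block above tears the fixture down roughly in the reverse order it was built: unload the kernel NVMe-oF host modules, drop the SPDK-tagged iptables rules, dismantle the veth/bridge topology and delete the target netns. A rough outline of that cleanup reconstructed from the traced nvmf/common.sh commands (the per-command structure follows the trace; the bodies of helpers such as remove_spdk_ns are an assumption):

    # Sketch of the teardown traced above (nvmf_tcp_fini / nvmf_veth_fini path).
    modprobe -v -r nvme-tcp                                   # unload host-side transport modules
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore      # keep everything except SPDK-tagged rules
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster                          # detach bridge ports
        ip link set "$port" down
    done
    ip link delete nvmf_br type bridge                        # remove the bridge itself
    ip link delete nvmf_init_if                               # remove initiator-side veths
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if   # remove target-side veths
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                          # assumed body of remove_spdk_ns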
00:19:40.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.254 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:40.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.254 --rc genhtml_branch_coverage=1 00:19:40.254 --rc genhtml_function_coverage=1 00:19:40.254 --rc genhtml_legend=1 00:19:40.254 --rc geninfo_all_blocks=1 00:19:40.254 --rc geninfo_unexecuted_blocks=1 00:19:40.255 00:19:40.255 ' 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:40.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.255 --rc genhtml_branch_coverage=1 00:19:40.255 --rc genhtml_function_coverage=1 00:19:40.255 --rc genhtml_legend=1 00:19:40.255 --rc geninfo_all_blocks=1 00:19:40.255 --rc geninfo_unexecuted_blocks=1 00:19:40.255 00:19:40.255 ' 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:40.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.255 --rc genhtml_branch_coverage=1 00:19:40.255 --rc genhtml_function_coverage=1 00:19:40.255 --rc genhtml_legend=1 00:19:40.255 --rc geninfo_all_blocks=1 00:19:40.255 --rc geninfo_unexecuted_blocks=1 00:19:40.255 00:19:40.255 ' 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:40.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.255 --rc genhtml_branch_coverage=1 00:19:40.255 --rc genhtml_function_coverage=1 00:19:40.255 --rc genhtml_legend=1 00:19:40.255 --rc geninfo_all_blocks=1 00:19:40.255 --rc geninfo_unexecuted_blocks=1 00:19:40.255 00:19:40.255 ' 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
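Before the identify_kernel_nvmf test body starts, the harness decides how to invoke lcov by comparing dotted version strings field by field (the lt 1.15 2 / cmp_versions trace above). A minimal, self-contained sketch of that comparison idiom; the function name mirrors the traced scripts/common.sh helper, but the real implementation differs in detail and handles more cases:

    # Sketch: succeed when version $1 sorts strictly before version $2,
    # comparing dot-separated numeric fields left to right.
    lt() {
        local IFS=.
        local -a ver1=($1) ver2=($2)
        local i a b
        for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
            a=${ver1[i]:-0} b=${ver2[i]:-0}
            ((a < b)) && return 0
            ((a > b)) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "1.15 sorts before 2"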
00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:40.255 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:40.255 04:09:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:40.255 04:09:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:40.255 Cannot find device "nvmf_init_br" 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:40.255 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:40.255 Cannot find device "nvmf_init_br2" 00:19:40.256 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:40.256 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:40.515 Cannot find device "nvmf_tgt_br" 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:40.515 Cannot find device "nvmf_tgt_br2" 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:40.515 Cannot find device "nvmf_init_br" 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:40.515 Cannot find device "nvmf_init_br2" 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:40.515 Cannot find device "nvmf_tgt_br" 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:40.515 Cannot find device "nvmf_tgt_br2" 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:40.515 Cannot find device "nvmf_br" 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:40.515 Cannot find device "nvmf_init_if" 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:40.515 Cannot find device "nvmf_init_if2" 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:40.515 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.515 04:09:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:40.515 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:40.515 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:40.774 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:40.774 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:40.775 04:09:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:40.775 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:40.775 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:19:40.775 00:19:40.775 --- 10.0.0.3 ping statistics --- 00:19:40.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.775 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:40.775 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:40.775 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:19:40.775 00:19:40.775 --- 10.0.0.4 ping statistics --- 00:19:40.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.775 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:40.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:40.775 00:19:40.775 --- 10.0.0.1 ping statistics --- 00:19:40.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.775 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:40.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:40.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:19:40.775 00:19:40.775 --- 10.0.0.2 ping statistics --- 00:19:40.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.775 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:40.775 04:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:41.033 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:41.033 Waiting for block devices as requested 00:19:41.291 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:41.291 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:41.291 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:41.291 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:41.291 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:41.291 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:41.291 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:41.291 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:41.291 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:41.291 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:41.291 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:41.549 No valid GPT data, bailing 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:41.550 04:09:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:41.550 No valid GPT data, bailing 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:41.550 No valid GPT data, bailing 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:41.550 No valid GPT data, bailing 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:19:41.550 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:41.809 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:41.809 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:41.809 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:41.809 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:19:41.809 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:41.809 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:19:41.809 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:41.809 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:19:41.809 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:19:41.809 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:19:41.809 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:41.809 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid=9ed3da8d-b493-400f-8e42-fb307dd7edcc -a 10.0.0.1 -t tcp -s 4420 00:19:41.809 00:19:41.809 Discovery Log Number of Records 2, Generation counter 2 00:19:41.809 =====Discovery Log Entry 0====== 00:19:41.809 trtype: tcp 00:19:41.809 adrfam: ipv4 00:19:41.809 subtype: current discovery subsystem 00:19:41.809 treq: not specified, sq flow control disable supported 00:19:41.809 portid: 1 00:19:41.809 trsvcid: 4420 00:19:41.809 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:41.809 traddr: 10.0.0.1 00:19:41.809 eflags: none 00:19:41.809 sectype: none 00:19:41.809 =====Discovery Log Entry 1====== 00:19:41.809 trtype: tcp 00:19:41.809 adrfam: ipv4 00:19:41.809 subtype: nvme subsystem 00:19:41.809 treq: not 
specified, sq flow control disable supported 00:19:41.809 portid: 1 00:19:41.809 trsvcid: 4420 00:19:41.809 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:41.809 traddr: 10.0.0.1 00:19:41.809 eflags: none 00:19:41.809 sectype: none 00:19:41.809 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:41.809 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:41.809 ===================================================== 00:19:41.809 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:41.809 ===================================================== 00:19:41.809 Controller Capabilities/Features 00:19:41.809 ================================ 00:19:41.809 Vendor ID: 0000 00:19:41.809 Subsystem Vendor ID: 0000 00:19:41.809 Serial Number: 27d5d6a262e4d58b06c6 00:19:41.809 Model Number: Linux 00:19:41.809 Firmware Version: 6.8.9-20 00:19:41.809 Recommended Arb Burst: 0 00:19:41.809 IEEE OUI Identifier: 00 00 00 00:19:41.809 Multi-path I/O 00:19:41.809 May have multiple subsystem ports: No 00:19:41.809 May have multiple controllers: No 00:19:41.809 Associated with SR-IOV VF: No 00:19:41.809 Max Data Transfer Size: Unlimited 00:19:41.809 Max Number of Namespaces: 0 00:19:41.809 Max Number of I/O Queues: 1024 00:19:41.809 NVMe Specification Version (VS): 1.3 00:19:41.809 NVMe Specification Version (Identify): 1.3 00:19:41.809 Maximum Queue Entries: 1024 00:19:41.809 Contiguous Queues Required: No 00:19:41.809 Arbitration Mechanisms Supported 00:19:41.809 Weighted Round Robin: Not Supported 00:19:41.809 Vendor Specific: Not Supported 00:19:41.809 Reset Timeout: 7500 ms 00:19:41.809 Doorbell Stride: 4 bytes 00:19:41.809 NVM Subsystem Reset: Not Supported 00:19:41.809 Command Sets Supported 00:19:41.809 NVM Command Set: Supported 00:19:41.809 Boot Partition: Not Supported 00:19:41.809 Memory Page Size Minimum: 4096 bytes 00:19:41.809 Memory Page Size Maximum: 4096 bytes 00:19:41.809 Persistent Memory Region: Not Supported 00:19:41.809 Optional Asynchronous Events Supported 00:19:41.809 Namespace Attribute Notices: Not Supported 00:19:41.809 Firmware Activation Notices: Not Supported 00:19:41.809 ANA Change Notices: Not Supported 00:19:41.809 PLE Aggregate Log Change Notices: Not Supported 00:19:41.809 LBA Status Info Alert Notices: Not Supported 00:19:41.809 EGE Aggregate Log Change Notices: Not Supported 00:19:41.809 Normal NVM Subsystem Shutdown event: Not Supported 00:19:41.809 Zone Descriptor Change Notices: Not Supported 00:19:41.809 Discovery Log Change Notices: Supported 00:19:41.809 Controller Attributes 00:19:41.809 128-bit Host Identifier: Not Supported 00:19:41.809 Non-Operational Permissive Mode: Not Supported 00:19:41.809 NVM Sets: Not Supported 00:19:41.809 Read Recovery Levels: Not Supported 00:19:41.809 Endurance Groups: Not Supported 00:19:41.809 Predictable Latency Mode: Not Supported 00:19:41.809 Traffic Based Keep ALive: Not Supported 00:19:41.809 Namespace Granularity: Not Supported 00:19:41.809 SQ Associations: Not Supported 00:19:41.809 UUID List: Not Supported 00:19:41.809 Multi-Domain Subsystem: Not Supported 00:19:41.809 Fixed Capacity Management: Not Supported 00:19:41.809 Variable Capacity Management: Not Supported 00:19:41.809 Delete Endurance Group: Not Supported 00:19:41.809 Delete NVM Set: Not Supported 00:19:41.809 Extended LBA Formats Supported: Not Supported 00:19:41.809 Flexible Data 
Placement Supported: Not Supported 00:19:41.809 00:19:41.809 Controller Memory Buffer Support 00:19:41.809 ================================ 00:19:41.809 Supported: No 00:19:41.809 00:19:41.809 Persistent Memory Region Support 00:19:41.809 ================================ 00:19:41.809 Supported: No 00:19:41.809 00:19:41.809 Admin Command Set Attributes 00:19:41.809 ============================ 00:19:41.809 Security Send/Receive: Not Supported 00:19:41.809 Format NVM: Not Supported 00:19:41.809 Firmware Activate/Download: Not Supported 00:19:41.809 Namespace Management: Not Supported 00:19:41.809 Device Self-Test: Not Supported 00:19:41.809 Directives: Not Supported 00:19:41.809 NVMe-MI: Not Supported 00:19:41.809 Virtualization Management: Not Supported 00:19:41.809 Doorbell Buffer Config: Not Supported 00:19:41.809 Get LBA Status Capability: Not Supported 00:19:41.809 Command & Feature Lockdown Capability: Not Supported 00:19:41.809 Abort Command Limit: 1 00:19:41.809 Async Event Request Limit: 1 00:19:41.809 Number of Firmware Slots: N/A 00:19:41.809 Firmware Slot 1 Read-Only: N/A 00:19:41.809 Firmware Activation Without Reset: N/A 00:19:41.809 Multiple Update Detection Support: N/A 00:19:41.809 Firmware Update Granularity: No Information Provided 00:19:41.809 Per-Namespace SMART Log: No 00:19:41.809 Asymmetric Namespace Access Log Page: Not Supported 00:19:41.809 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:41.809 Command Effects Log Page: Not Supported 00:19:41.809 Get Log Page Extended Data: Supported 00:19:41.809 Telemetry Log Pages: Not Supported 00:19:41.809 Persistent Event Log Pages: Not Supported 00:19:41.810 Supported Log Pages Log Page: May Support 00:19:41.810 Commands Supported & Effects Log Page: Not Supported 00:19:41.810 Feature Identifiers & Effects Log Page:May Support 00:19:41.810 NVMe-MI Commands & Effects Log Page: May Support 00:19:41.810 Data Area 4 for Telemetry Log: Not Supported 00:19:41.810 Error Log Page Entries Supported: 1 00:19:41.810 Keep Alive: Not Supported 00:19:41.810 00:19:41.810 NVM Command Set Attributes 00:19:41.810 ========================== 00:19:41.810 Submission Queue Entry Size 00:19:41.810 Max: 1 00:19:41.810 Min: 1 00:19:41.810 Completion Queue Entry Size 00:19:41.810 Max: 1 00:19:41.810 Min: 1 00:19:41.810 Number of Namespaces: 0 00:19:41.810 Compare Command: Not Supported 00:19:41.810 Write Uncorrectable Command: Not Supported 00:19:41.810 Dataset Management Command: Not Supported 00:19:41.810 Write Zeroes Command: Not Supported 00:19:41.810 Set Features Save Field: Not Supported 00:19:41.810 Reservations: Not Supported 00:19:41.810 Timestamp: Not Supported 00:19:41.810 Copy: Not Supported 00:19:41.810 Volatile Write Cache: Not Present 00:19:41.810 Atomic Write Unit (Normal): 1 00:19:41.810 Atomic Write Unit (PFail): 1 00:19:41.810 Atomic Compare & Write Unit: 1 00:19:41.810 Fused Compare & Write: Not Supported 00:19:41.810 Scatter-Gather List 00:19:41.810 SGL Command Set: Supported 00:19:41.810 SGL Keyed: Not Supported 00:19:41.810 SGL Bit Bucket Descriptor: Not Supported 00:19:41.810 SGL Metadata Pointer: Not Supported 00:19:41.810 Oversized SGL: Not Supported 00:19:41.810 SGL Metadata Address: Not Supported 00:19:41.810 SGL Offset: Supported 00:19:41.810 Transport SGL Data Block: Not Supported 00:19:41.810 Replay Protected Memory Block: Not Supported 00:19:41.810 00:19:41.810 Firmware Slot Information 00:19:41.810 ========================= 00:19:41.810 Active slot: 0 00:19:41.810 00:19:41.810 00:19:41.810 Error Log 
00:19:41.810 ========= 00:19:41.810 00:19:41.810 Active Namespaces 00:19:41.810 ================= 00:19:41.810 Discovery Log Page 00:19:41.810 ================== 00:19:41.810 Generation Counter: 2 00:19:41.810 Number of Records: 2 00:19:41.810 Record Format: 0 00:19:41.810 00:19:41.810 Discovery Log Entry 0 00:19:41.810 ---------------------- 00:19:41.810 Transport Type: 3 (TCP) 00:19:41.810 Address Family: 1 (IPv4) 00:19:41.810 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:41.810 Entry Flags: 00:19:41.810 Duplicate Returned Information: 0 00:19:41.810 Explicit Persistent Connection Support for Discovery: 0 00:19:41.810 Transport Requirements: 00:19:41.810 Secure Channel: Not Specified 00:19:41.810 Port ID: 1 (0x0001) 00:19:41.810 Controller ID: 65535 (0xffff) 00:19:41.810 Admin Max SQ Size: 32 00:19:41.810 Transport Service Identifier: 4420 00:19:41.810 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:41.810 Transport Address: 10.0.0.1 00:19:41.810 Discovery Log Entry 1 00:19:41.810 ---------------------- 00:19:41.810 Transport Type: 3 (TCP) 00:19:41.810 Address Family: 1 (IPv4) 00:19:41.810 Subsystem Type: 2 (NVM Subsystem) 00:19:41.810 Entry Flags: 00:19:41.810 Duplicate Returned Information: 0 00:19:41.810 Explicit Persistent Connection Support for Discovery: 0 00:19:41.810 Transport Requirements: 00:19:41.810 Secure Channel: Not Specified 00:19:41.810 Port ID: 1 (0x0001) 00:19:41.810 Controller ID: 65535 (0xffff) 00:19:41.810 Admin Max SQ Size: 32 00:19:41.810 Transport Service Identifier: 4420 00:19:41.810 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:41.810 Transport Address: 10.0.0.1 00:19:41.810 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:42.069 get_feature(0x01) failed 00:19:42.069 get_feature(0x02) failed 00:19:42.069 get_feature(0x04) failed 00:19:42.069 ===================================================== 00:19:42.069 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:42.069 ===================================================== 00:19:42.069 Controller Capabilities/Features 00:19:42.069 ================================ 00:19:42.069 Vendor ID: 0000 00:19:42.069 Subsystem Vendor ID: 0000 00:19:42.069 Serial Number: d98cf637b76205ab42ad 00:19:42.069 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:42.069 Firmware Version: 6.8.9-20 00:19:42.069 Recommended Arb Burst: 6 00:19:42.069 IEEE OUI Identifier: 00 00 00 00:19:42.069 Multi-path I/O 00:19:42.069 May have multiple subsystem ports: Yes 00:19:42.069 May have multiple controllers: Yes 00:19:42.069 Associated with SR-IOV VF: No 00:19:42.069 Max Data Transfer Size: Unlimited 00:19:42.069 Max Number of Namespaces: 1024 00:19:42.069 Max Number of I/O Queues: 128 00:19:42.069 NVMe Specification Version (VS): 1.3 00:19:42.069 NVMe Specification Version (Identify): 1.3 00:19:42.069 Maximum Queue Entries: 1024 00:19:42.069 Contiguous Queues Required: No 00:19:42.069 Arbitration Mechanisms Supported 00:19:42.069 Weighted Round Robin: Not Supported 00:19:42.069 Vendor Specific: Not Supported 00:19:42.069 Reset Timeout: 7500 ms 00:19:42.069 Doorbell Stride: 4 bytes 00:19:42.069 NVM Subsystem Reset: Not Supported 00:19:42.069 Command Sets Supported 00:19:42.069 NVM Command Set: Supported 00:19:42.069 Boot Partition: Not Supported 00:19:42.069 Memory 
Page Size Minimum: 4096 bytes 00:19:42.069 Memory Page Size Maximum: 4096 bytes 00:19:42.069 Persistent Memory Region: Not Supported 00:19:42.069 Optional Asynchronous Events Supported 00:19:42.069 Namespace Attribute Notices: Supported 00:19:42.069 Firmware Activation Notices: Not Supported 00:19:42.069 ANA Change Notices: Supported 00:19:42.069 PLE Aggregate Log Change Notices: Not Supported 00:19:42.069 LBA Status Info Alert Notices: Not Supported 00:19:42.069 EGE Aggregate Log Change Notices: Not Supported 00:19:42.069 Normal NVM Subsystem Shutdown event: Not Supported 00:19:42.069 Zone Descriptor Change Notices: Not Supported 00:19:42.070 Discovery Log Change Notices: Not Supported 00:19:42.070 Controller Attributes 00:19:42.070 128-bit Host Identifier: Supported 00:19:42.070 Non-Operational Permissive Mode: Not Supported 00:19:42.070 NVM Sets: Not Supported 00:19:42.070 Read Recovery Levels: Not Supported 00:19:42.070 Endurance Groups: Not Supported 00:19:42.070 Predictable Latency Mode: Not Supported 00:19:42.070 Traffic Based Keep ALive: Supported 00:19:42.070 Namespace Granularity: Not Supported 00:19:42.070 SQ Associations: Not Supported 00:19:42.070 UUID List: Not Supported 00:19:42.070 Multi-Domain Subsystem: Not Supported 00:19:42.070 Fixed Capacity Management: Not Supported 00:19:42.070 Variable Capacity Management: Not Supported 00:19:42.070 Delete Endurance Group: Not Supported 00:19:42.070 Delete NVM Set: Not Supported 00:19:42.070 Extended LBA Formats Supported: Not Supported 00:19:42.070 Flexible Data Placement Supported: Not Supported 00:19:42.070 00:19:42.070 Controller Memory Buffer Support 00:19:42.070 ================================ 00:19:42.070 Supported: No 00:19:42.070 00:19:42.070 Persistent Memory Region Support 00:19:42.070 ================================ 00:19:42.070 Supported: No 00:19:42.070 00:19:42.070 Admin Command Set Attributes 00:19:42.070 ============================ 00:19:42.070 Security Send/Receive: Not Supported 00:19:42.070 Format NVM: Not Supported 00:19:42.070 Firmware Activate/Download: Not Supported 00:19:42.070 Namespace Management: Not Supported 00:19:42.070 Device Self-Test: Not Supported 00:19:42.070 Directives: Not Supported 00:19:42.070 NVMe-MI: Not Supported 00:19:42.070 Virtualization Management: Not Supported 00:19:42.070 Doorbell Buffer Config: Not Supported 00:19:42.070 Get LBA Status Capability: Not Supported 00:19:42.070 Command & Feature Lockdown Capability: Not Supported 00:19:42.070 Abort Command Limit: 4 00:19:42.070 Async Event Request Limit: 4 00:19:42.070 Number of Firmware Slots: N/A 00:19:42.070 Firmware Slot 1 Read-Only: N/A 00:19:42.070 Firmware Activation Without Reset: N/A 00:19:42.070 Multiple Update Detection Support: N/A 00:19:42.070 Firmware Update Granularity: No Information Provided 00:19:42.070 Per-Namespace SMART Log: Yes 00:19:42.070 Asymmetric Namespace Access Log Page: Supported 00:19:42.070 ANA Transition Time : 10 sec 00:19:42.070 00:19:42.070 Asymmetric Namespace Access Capabilities 00:19:42.070 ANA Optimized State : Supported 00:19:42.070 ANA Non-Optimized State : Supported 00:19:42.070 ANA Inaccessible State : Supported 00:19:42.070 ANA Persistent Loss State : Supported 00:19:42.070 ANA Change State : Supported 00:19:42.070 ANAGRPID is not changed : No 00:19:42.070 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:42.070 00:19:42.070 ANA Group Identifier Maximum : 128 00:19:42.070 Number of ANA Group Identifiers : 128 00:19:42.070 Max Number of Allowed Namespaces : 1024 00:19:42.070 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:19:42.070 Command Effects Log Page: Supported 00:19:42.070 Get Log Page Extended Data: Supported 00:19:42.070 Telemetry Log Pages: Not Supported 00:19:42.070 Persistent Event Log Pages: Not Supported 00:19:42.070 Supported Log Pages Log Page: May Support 00:19:42.070 Commands Supported & Effects Log Page: Not Supported 00:19:42.070 Feature Identifiers & Effects Log Page:May Support 00:19:42.070 NVMe-MI Commands & Effects Log Page: May Support 00:19:42.070 Data Area 4 for Telemetry Log: Not Supported 00:19:42.070 Error Log Page Entries Supported: 128 00:19:42.070 Keep Alive: Supported 00:19:42.070 Keep Alive Granularity: 1000 ms 00:19:42.070 00:19:42.070 NVM Command Set Attributes 00:19:42.070 ========================== 00:19:42.070 Submission Queue Entry Size 00:19:42.070 Max: 64 00:19:42.070 Min: 64 00:19:42.070 Completion Queue Entry Size 00:19:42.070 Max: 16 00:19:42.070 Min: 16 00:19:42.070 Number of Namespaces: 1024 00:19:42.070 Compare Command: Not Supported 00:19:42.070 Write Uncorrectable Command: Not Supported 00:19:42.070 Dataset Management Command: Supported 00:19:42.070 Write Zeroes Command: Supported 00:19:42.070 Set Features Save Field: Not Supported 00:19:42.070 Reservations: Not Supported 00:19:42.070 Timestamp: Not Supported 00:19:42.070 Copy: Not Supported 00:19:42.070 Volatile Write Cache: Present 00:19:42.070 Atomic Write Unit (Normal): 1 00:19:42.070 Atomic Write Unit (PFail): 1 00:19:42.070 Atomic Compare & Write Unit: 1 00:19:42.070 Fused Compare & Write: Not Supported 00:19:42.070 Scatter-Gather List 00:19:42.070 SGL Command Set: Supported 00:19:42.070 SGL Keyed: Not Supported 00:19:42.070 SGL Bit Bucket Descriptor: Not Supported 00:19:42.070 SGL Metadata Pointer: Not Supported 00:19:42.070 Oversized SGL: Not Supported 00:19:42.070 SGL Metadata Address: Not Supported 00:19:42.070 SGL Offset: Supported 00:19:42.070 Transport SGL Data Block: Not Supported 00:19:42.070 Replay Protected Memory Block: Not Supported 00:19:42.070 00:19:42.070 Firmware Slot Information 00:19:42.070 ========================= 00:19:42.070 Active slot: 0 00:19:42.070 00:19:42.070 Asymmetric Namespace Access 00:19:42.070 =========================== 00:19:42.070 Change Count : 0 00:19:42.070 Number of ANA Group Descriptors : 1 00:19:42.070 ANA Group Descriptor : 0 00:19:42.070 ANA Group ID : 1 00:19:42.070 Number of NSID Values : 1 00:19:42.070 Change Count : 0 00:19:42.070 ANA State : 1 00:19:42.070 Namespace Identifier : 1 00:19:42.070 00:19:42.070 Commands Supported and Effects 00:19:42.070 ============================== 00:19:42.070 Admin Commands 00:19:42.070 -------------- 00:19:42.070 Get Log Page (02h): Supported 00:19:42.070 Identify (06h): Supported 00:19:42.070 Abort (08h): Supported 00:19:42.070 Set Features (09h): Supported 00:19:42.070 Get Features (0Ah): Supported 00:19:42.070 Asynchronous Event Request (0Ch): Supported 00:19:42.070 Keep Alive (18h): Supported 00:19:42.070 I/O Commands 00:19:42.070 ------------ 00:19:42.070 Flush (00h): Supported 00:19:42.070 Write (01h): Supported LBA-Change 00:19:42.070 Read (02h): Supported 00:19:42.070 Write Zeroes (08h): Supported LBA-Change 00:19:42.070 Dataset Management (09h): Supported 00:19:42.070 00:19:42.070 Error Log 00:19:42.070 ========= 00:19:42.070 Entry: 0 00:19:42.070 Error Count: 0x3 00:19:42.070 Submission Queue Id: 0x0 00:19:42.070 Command Id: 0x5 00:19:42.070 Phase Bit: 0 00:19:42.070 Status Code: 0x2 00:19:42.070 Status Code Type: 0x0 00:19:42.070 Do Not Retry: 1 00:19:42.070 Error 
Location: 0x28 00:19:42.070 LBA: 0x0 00:19:42.070 Namespace: 0x0 00:19:42.070 Vendor Log Page: 0x0 00:19:42.070 ----------- 00:19:42.070 Entry: 1 00:19:42.070 Error Count: 0x2 00:19:42.070 Submission Queue Id: 0x0 00:19:42.070 Command Id: 0x5 00:19:42.070 Phase Bit: 0 00:19:42.070 Status Code: 0x2 00:19:42.070 Status Code Type: 0x0 00:19:42.070 Do Not Retry: 1 00:19:42.070 Error Location: 0x28 00:19:42.070 LBA: 0x0 00:19:42.070 Namespace: 0x0 00:19:42.070 Vendor Log Page: 0x0 00:19:42.070 ----------- 00:19:42.070 Entry: 2 00:19:42.070 Error Count: 0x1 00:19:42.070 Submission Queue Id: 0x0 00:19:42.070 Command Id: 0x4 00:19:42.070 Phase Bit: 0 00:19:42.070 Status Code: 0x2 00:19:42.070 Status Code Type: 0x0 00:19:42.070 Do Not Retry: 1 00:19:42.070 Error Location: 0x28 00:19:42.070 LBA: 0x0 00:19:42.070 Namespace: 0x0 00:19:42.070 Vendor Log Page: 0x0 00:19:42.070 00:19:42.070 Number of Queues 00:19:42.070 ================ 00:19:42.070 Number of I/O Submission Queues: 128 00:19:42.070 Number of I/O Completion Queues: 128 00:19:42.070 00:19:42.070 ZNS Specific Controller Data 00:19:42.070 ============================ 00:19:42.070 Zone Append Size Limit: 0 00:19:42.070 00:19:42.070 00:19:42.070 Active Namespaces 00:19:42.070 ================= 00:19:42.070 get_feature(0x05) failed 00:19:42.070 Namespace ID:1 00:19:42.070 Command Set Identifier: NVM (00h) 00:19:42.070 Deallocate: Supported 00:19:42.070 Deallocated/Unwritten Error: Not Supported 00:19:42.070 Deallocated Read Value: Unknown 00:19:42.070 Deallocate in Write Zeroes: Not Supported 00:19:42.070 Deallocated Guard Field: 0xFFFF 00:19:42.070 Flush: Supported 00:19:42.070 Reservation: Not Supported 00:19:42.070 Namespace Sharing Capabilities: Multiple Controllers 00:19:42.070 Size (in LBAs): 1310720 (5GiB) 00:19:42.071 Capacity (in LBAs): 1310720 (5GiB) 00:19:42.071 Utilization (in LBAs): 1310720 (5GiB) 00:19:42.071 UUID: 27267e07-a2c5-43d4-acce-2a3f8f7b0521 00:19:42.071 Thin Provisioning: Not Supported 00:19:42.071 Per-NS Atomic Units: Yes 00:19:42.071 Atomic Boundary Size (Normal): 0 00:19:42.071 Atomic Boundary Size (PFail): 0 00:19:42.071 Atomic Boundary Offset: 0 00:19:42.071 NGUID/EUI64 Never Reused: No 00:19:42.071 ANA group ID: 1 00:19:42.071 Namespace Write Protected: No 00:19:42.071 Number of LBA Formats: 1 00:19:42.071 Current LBA Format: LBA Format #00 00:19:42.071 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:42.071 00:19:42.071 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:42.071 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:42.071 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:19:42.071 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:42.071 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:19:42.071 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:42.071 04:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:42.071 rmmod nvme_tcp 00:19:42.071 rmmod nvme_fabrics 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:19:42.329 04:09:24 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.329 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.587 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:19:42.587 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:42.587 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:42.587 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:19:42.587 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:42.587 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:42.587 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:42.587 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:42.587 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:42.587 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:19:42.587 04:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:43.153 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:43.153 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:43.423 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:43.423 00:19:43.423 real 0m3.267s 00:19:43.423 user 0m1.110s 00:19:43.423 sys 0m1.499s 00:19:43.423 04:09:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:43.423 04:09:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.423 ************************************ 00:19:43.423 END TEST nvmf_identify_kernel_target 00:19:43.423 ************************************ 00:19:43.423 04:09:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:43.423 04:09:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:43.423 04:09:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:43.423 04:09:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.423 ************************************ 00:19:43.423 START TEST nvmf_auth_host 00:19:43.423 ************************************ 00:19:43.423 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:43.423 * Looking for test storage... 
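For reference, the kernel-target lifecycle that the identify test above drove through configfs condenses to the sequence below. This is a sketch: the echo destinations are assumed attribute file names inferred from the values shown in the trace, not copied from common.sh.

nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet                                   # exposes /sys/kernel/config/nvmet
mkdir "$sub" "$sub/namespaces/1" "$port"
echo 1            > "$sub/attr_allow_any_host"   # assumed attribute name
echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                 # listener goes live on 10.0.0.1:4420
# Teardown, mirroring the clean_kernel_target steps above:
echo 0 > "$sub/namespaces/1/enable"
rm -f "$port/subsystems/$nqn"
rmdir "$sub/namespaces/1" "$port" "$sub"
modprobe -r nvmet_tcp nvmet

Once the symlink is created the kernel target answers on 10.0.0.1:4420, which is what the nvme discover and spdk_nvme_identify calls earlier in the trace relied on.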
00:19:43.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:43.423 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:43.423 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:43.423 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:43.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.681 --rc genhtml_branch_coverage=1 00:19:43.681 --rc genhtml_function_coverage=1 00:19:43.681 --rc genhtml_legend=1 00:19:43.681 --rc geninfo_all_blocks=1 00:19:43.681 --rc geninfo_unexecuted_blocks=1 00:19:43.681 00:19:43.681 ' 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:43.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.681 --rc genhtml_branch_coverage=1 00:19:43.681 --rc genhtml_function_coverage=1 00:19:43.681 --rc genhtml_legend=1 00:19:43.681 --rc geninfo_all_blocks=1 00:19:43.681 --rc geninfo_unexecuted_blocks=1 00:19:43.681 00:19:43.681 ' 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:43.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.681 --rc genhtml_branch_coverage=1 00:19:43.681 --rc genhtml_function_coverage=1 00:19:43.681 --rc genhtml_legend=1 00:19:43.681 --rc geninfo_all_blocks=1 00:19:43.681 --rc geninfo_unexecuted_blocks=1 00:19:43.681 00:19:43.681 ' 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:43.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.681 --rc genhtml_branch_coverage=1 00:19:43.681 --rc genhtml_function_coverage=1 00:19:43.681 --rc genhtml_legend=1 00:19:43.681 --rc geninfo_all_blocks=1 00:19:43.681 --rc geninfo_unexecuted_blocks=1 00:19:43.681 00:19:43.681 ' 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.681 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:43.682 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:43.682 Cannot find device "nvmf_init_br" 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:43.682 Cannot find device "nvmf_init_br2" 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:43.682 Cannot find device "nvmf_tgt_br" 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:43.682 Cannot find device "nvmf_tgt_br2" 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:43.682 Cannot find device "nvmf_init_br" 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:43.682 Cannot find device "nvmf_init_br2" 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:43.682 Cannot find device "nvmf_tgt_br" 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:43.682 Cannot find device "nvmf_tgt_br2" 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:43.682 Cannot find device "nvmf_br" 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:43.682 Cannot find device "nvmf_init_if" 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:19:43.682 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:43.940 Cannot find device "nvmf_init_if2" 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:43.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.940 04:09:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:43.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
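The nvmf_veth_init trace above boils down to a small, reproducible topology: the initiator-side veth pairs stay in the default namespace, the target-side pairs have their "if" ends moved into nvmf_tgt_ns_spdk, and the "br" halves of every pair are enslaved to the nvmf_br bridge. Below is a condensed sketch of the same setup using the interface names and 10.0.0.0/24 addresses from the trace; only the first initiator/target pair is shown to keep it short.

    # target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk

    # one initiator veth pair (stays in the default namespace) ...
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    # ... and one target veth pair, whose "if" end moves into the namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addresses as used by the test: .1 on the initiator, .3 on the target
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the two "br" halves so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # sanity check, mirroring nvmf/common.sh@222
    ping -c 1 10.0.0.3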
00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:43.940 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:43.941 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:43.941 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:43.941 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:43.941 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:44.199 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:44.199 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:19:44.199 00:19:44.199 --- 10.0.0.3 ping statistics --- 00:19:44.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.199 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:44.199 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:44.199 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:19:44.199 00:19:44.199 --- 10.0.0.4 ping statistics --- 00:19:44.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.199 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:44.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:44.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:19:44.199 00:19:44.199 --- 10.0.0.1 ping statistics --- 00:19:44.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.199 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:44.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:44.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:19:44.199 00:19:44.199 --- 10.0.0.2 ping statistics --- 00:19:44.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.199 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78889 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78889 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78889 ']' 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
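Once both directions ping, the script opens the NVMe/TCP port on the initiator-side interfaces and launches the SPDK application inside the namespace. The iptables rules carry an 'SPDK_NVMF:' comment so teardown can find and delete them later, and the nvmf_tgt binary is the one nvmfappstart resolves through NVMF_APP (which was just prefixed with the netns exec command). A condensed sketch of that step, using the in-repo build path shown in the trace:

    # allow NVMe/TCP (port 4420) in on the initiator veths; tag the rules for later cleanup
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    # the host needs the NVMe/TCP initiator driver
    modprobe nvme-tcp

    # run the SPDK app inside the target namespace with nvme_auth debug logging;
    # the test records $! as nvmfpid and then waits for /var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!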
00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.199 04:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.132 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.132 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:45.132 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:45.132 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:45.132 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4b7a6bffd03070eaae060dc1f592610c 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.pP7 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4b7a6bffd03070eaae060dc1f592610c 0 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4b7a6bffd03070eaae060dc1f592610c 0 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4b7a6bffd03070eaae060dc1f592610c 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.pP7 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.pP7 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.pP7 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:45.391 04:09:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5289c34bc0b362646d6c0d0386a1193d7285f2a44df55ab1e05d66729ee4640b 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.fN9 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5289c34bc0b362646d6c0d0386a1193d7285f2a44df55ab1e05d66729ee4640b 3 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5289c34bc0b362646d6c0d0386a1193d7285f2a44df55ab1e05d66729ee4640b 3 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5289c34bc0b362646d6c0d0386a1193d7285f2a44df55ab1e05d66729ee4640b 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.fN9 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.fN9 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.fN9 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:45.391 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1963d2aef94011c4886e97a240279d538f0429606d9e8ad8 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.mYz 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1963d2aef94011c4886e97a240279d538f0429606d9e8ad8 0 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1963d2aef94011c4886e97a240279d538f0429606d9e8ad8 0 
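Each gen_dhchap_key call in the trace follows the same recipe: pull N random bytes from /dev/urandom with xxd, wrap them into a DH-HMAC-CHAP secret with a small inline Python helper, and store the result in a chmod-0600 temp file whose path is recorded in keys[] or ckeys[]. The exact encoder lives in nvmf/common.sh's format_dhchap_key; the sketch below reproduces the idea under the assumption that the secret uses the standard DHHC-1 layout (base64 of the key bytes followed by their CRC-32), which matches the 'DHHC-1:00:...:' strings visible in the trace.

    # hedged stand-in for gen_dhchap_key <digest> <len>; digest id 0=null, 1=sha256, 2=sha384, 3=sha512
    digest_id=0
    key=$(xxd -p -c0 -l 24 /dev/urandom)          # 24 random bytes -> a 48-hex-char secret
    file=$(mktemp -t spdk.key-null.XXX)

    python3 - "$key" "$digest_id" <<'PY' > "$file"
    import base64, binascii, struct, sys
    # assumption: DHHC-1 secrets are base64(key_bytes || CRC-32(key_bytes)), CRC stored little-endian
    key = bytes.fromhex(sys.argv[1])
    crc = struct.pack("<I", binascii.crc32(key) & 0xffffffff)
    print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
    PY

    chmod 0600 "$file"
    echo "$file"                                   # the test stores this path in keys[]/ckeys[]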
00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1963d2aef94011c4886e97a240279d538f0429606d9e8ad8 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.mYz 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.mYz 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.mYz 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6b1dd561f00e6509cb0d1b4e1fc54a286031c30fe02859a9 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.RfB 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6b1dd561f00e6509cb0d1b4e1fc54a286031c30fe02859a9 2 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6b1dd561f00e6509cb0d1b4e1fc54a286031c30fe02859a9 2 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6b1dd561f00e6509cb0d1b4e1fc54a286031c30fe02859a9 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:45.392 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.RfB 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.RfB 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.RfB 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.654 04:09:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a8393e07da6c3571cb6517badf930999 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ow6 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a8393e07da6c3571cb6517badf930999 1 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a8393e07da6c3571cb6517badf930999 1 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a8393e07da6c3571cb6517badf930999 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ow6 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ow6 00:19:45.654 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ow6 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=877121f1f228496e63b56fd9fbfc01cd 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.SDs 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 877121f1f228496e63b56fd9fbfc01cd 1 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 877121f1f228496e63b56fd9fbfc01cd 1 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=877121f1f228496e63b56fd9fbfc01cd 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.SDs 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.SDs 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.SDs 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3f59227a17ec6b6b1fdef37b81d9b60b1fdd6285bfffe092 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.zX9 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3f59227a17ec6b6b1fdef37b81d9b60b1fdd6285bfffe092 2 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3f59227a17ec6b6b1fdef37b81d9b60b1fdd6285bfffe092 2 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3f59227a17ec6b6b1fdef37b81d9b60b1fdd6285bfffe092 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.zX9 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.zX9 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.zX9 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:45.655 04:09:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b01f54ba0a4751181e177dc5e23c8a96 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.btX 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b01f54ba0a4751181e177dc5e23c8a96 0 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b01f54ba0a4751181e177dc5e23c8a96 0 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b01f54ba0a4751181e177dc5e23c8a96 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:45.655 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.btX 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.btX 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.btX 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9b9adb1c2aa713406b2e9a9258fe46e7d43ddaabd3a4100bb856a9b9aeda84ad 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.cAQ 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9b9adb1c2aa713406b2e9a9258fe46e7d43ddaabd3a4100bb856a9b9aeda84ad 3 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9b9adb1c2aa713406b2e9a9258fe46e7d43ddaabd3a4100bb856a9b9aeda84ad 3 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9b9adb1c2aa713406b2e9a9258fe46e7d43ddaabd3a4100bb856a9b9aeda84ad 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.cAQ 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.cAQ 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.cAQ 00:19:45.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78889 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78889 ']' 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.925 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.184 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.184 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:46.184 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:46.184 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pP7 00:19:46.184 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.184 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.184 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.184 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.fN9 ]] 00:19:46.184 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fN9 00:19:46.184 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.184 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.184 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.184 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:46.184 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.mYz 00:19:46.184 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.184 04:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.RfB ]] 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.RfB 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ow6 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.SDs ]] 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SDs 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.zX9 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.btX ]] 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.btX 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.184 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.cAQ 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:46.185 04:09:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:46.185 04:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:46.752 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:46.752 Waiting for block devices as requested 00:19:46.752 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:46.752 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:47.320 No valid GPT data, bailing 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:47.320 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:47.578 No valid GPT data, bailing 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:47.579 No valid GPT data, bailing 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:47.579 No valid GPT data, bailing 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid=9ed3da8d-b493-400f-8e42-fb307dd7edcc -a 10.0.0.1 -t tcp -s 4420 00:19:47.579 00:19:47.579 Discovery Log Number of Records 2, Generation counter 2 00:19:47.579 =====Discovery Log Entry 0====== 00:19:47.579 trtype: tcp 00:19:47.579 adrfam: ipv4 00:19:47.579 subtype: current discovery subsystem 00:19:47.579 treq: not specified, sq flow control disable supported 00:19:47.579 portid: 1 00:19:47.579 trsvcid: 4420 00:19:47.579 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:47.579 traddr: 10.0.0.1 00:19:47.579 eflags: none 00:19:47.579 sectype: none 00:19:47.579 =====Discovery Log Entry 1====== 00:19:47.579 trtype: tcp 00:19:47.579 adrfam: ipv4 00:19:47.579 subtype: nvme subsystem 00:19:47.579 treq: not specified, sq flow control disable supported 00:19:47.579 portid: 1 00:19:47.579 trsvcid: 4420 00:19:47.579 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:47.579 traddr: 10.0.0.1 00:19:47.579 eflags: none 00:19:47.579 sectype: none 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:47.579 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: ]] 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.838 nvme0n1 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.838 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.097 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.097 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.097 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.097 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.097 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.097 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:48.097 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.097 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.097 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:48.097 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.097 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: ]] 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.098 nvme0n1 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.098 
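Before each connection attempt, nvmet_auth_set_key (host/auth.sh@42-51) pushes the matching expectations into the kernel target: the HMAC to use, the DH group, the host's DHHC-1 secret, and (when bidirectional auth is tested) the controller secret. The xtrace output only shows the echo payloads, not the configfs files they are redirected into; assuming the standard nvmet host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), one iteration looks roughly like this:

    # hedged sketch: provision the kernel target's DH-HMAC-CHAP expectations for host0
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo 'hmac(sha256)' > "$host/dhchap_hash"        # digest under test
    echo ffdhe2048      > "$host/dhchap_dhgroup"     # DH group under test
    cat /tmp/spdk.key-null.pP7   > "$host/dhchap_key"       # keys[0]: host secret
    cat /tmp/spdk.key-sha512.fN9 > "$host/dhchap_ctrl_key"  # ckeys[0]: controller (bidirectional) secret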
04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: ]] 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.098 04:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:48.098 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.098 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.098 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.098 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.098 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.098 04:09:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.098 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.098 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.098 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.098 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:48.098 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.098 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:48.098 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:48.098 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:48.099 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.099 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.099 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.364 nvme0n1 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:19:48.364 04:09:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: ]] 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.364 nvme0n1 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.364 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: ]] 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.624 04:09:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.624 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.625 nvme0n1 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:48.625 
04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.625 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
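[The trace above and below walks the same cycle for every (dhgroup, keyid) pair: restrict the initiator to one DH-HMAC-CHAP digest/dhgroup, attach to the target with the key under test, confirm the controller appears, then detach before the next key. A minimal sketch of that cycle follows, assuming the SPDK test-harness helpers rpc_cmd and get_main_ns_ip and the keys/ckeys arrays loaded earlier in this run; it is an illustration of the sequence visible in the log, not the test script itself.

    # Sketch only: assumes rpc_cmd/get_main_ns_ip helpers and keys[]/ckeys[] set up earlier in the run.
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
      for keyid in "${!keys[@]}"; do
        # Limit the initiator to a single digest/dhgroup combination for this pass.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        ip=$(get_main_ns_ip)   # resolves NVMF_INITIATOR_IP (10.0.0.1 in this run)
        # Attach with the host key; pass the controller key only when a ckey exists (keyid 4 has none).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        # Authentication succeeded if the controller is listed; then detach for the next key.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done
]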
00:19:48.884 nvme0n1 00:19:48.884 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.884 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.884 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.884 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.884 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.884 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.885 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.885 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.885 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.885 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.885 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.885 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.885 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.885 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:48.885 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.885 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:48.885 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:48.885 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:48.885 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:19:48.885 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:19:48.885 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:48.885 04:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:49.144 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:19:49.144 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: ]] 00:19:49.144 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:19:49.144 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:49.144 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.144 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:49.144 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:49.144 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:49.144 04:09:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.144 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.144 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.144 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.144 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.144 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.144 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.145 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.145 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.145 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.145 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.145 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.145 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.145 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.145 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.145 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.145 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.145 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.145 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.404 nvme0n1 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.404 04:09:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: ]] 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.404 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.405 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.405 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.405 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.405 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.405 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.405 04:09:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.405 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.405 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.405 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.405 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.405 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.664 nvme0n1 00:19:49.664 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.664 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.664 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.664 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.664 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.664 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.664 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.664 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.664 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.664 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.664 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.664 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.664 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:49.664 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.664 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:49.664 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: ]] 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.665 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.924 nvme0n1 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: ]] 00:19:49.924 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.925 nvme0n1 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.925 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.185 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.186 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.186 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.186 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.186 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.186 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:50.186 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.186 04:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.186 nvme0n1 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:50.186 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: ]] 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.756 04:09:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.756 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.015 nvme0n1 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: ]] 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.015 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.016 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.016 04:09:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.016 04:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.274 nvme0n1 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: ]] 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.274 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.533 nvme0n1 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: ]] 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.533 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.791 nvme0n1 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:51.791 04:09:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.791 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.048 nvme0n1 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:19:52.048 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.306 04:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: ]] 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.760 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.018 nvme0n1 00:19:54.018 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.018 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: ]] 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.019 04:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.278 nvme0n1 00:19:54.278 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.278 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.278 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.278 04:09:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.278 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.278 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: ]] 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.536 04:09:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.536 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.795 nvme0n1 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: ]] 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:54.795 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.795 
04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.054 nvme0n1 00:19:55.054 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.054 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.054 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.054 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.054 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.055 04:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.313 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.573 nvme0n1 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.573 04:09:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: ]] 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.573 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.142 nvme0n1 00:19:56.142 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.142 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.142 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.142 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.142 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.142 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.142 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.142 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.142 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.142 04:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: ]] 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.142 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.711 nvme0n1 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: ]] 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.711 
04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.711 04:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.280 nvme0n1 00:19:57.280 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.280 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.280 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.280 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.280 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.280 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.280 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.280 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.280 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.280 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.280 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: ]] 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.539 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.108 nvme0n1 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.108 04:09:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.108 04:09:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.108 04:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.677 nvme0n1 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: ]] 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.677 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:58.678 nvme0n1 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: ]] 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.678 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.937 nvme0n1 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.937 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:58.938 
04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: ]] 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.938 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.197 nvme0n1 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:59.197 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: ]] 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.198 
04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.198 04:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.198 nvme0n1 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.198 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.458 nvme0n1 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: ]] 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.458 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.718 nvme0n1 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.718 
04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: ]] 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.718 04:09:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.718 nvme0n1 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.718 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:19:59.978 04:09:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: ]] 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.978 nvme0n1 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: ]] 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.978 04:09:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.978 04:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.238 nvme0n1 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:00.238 
04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.238 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
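For reference, the connect_authenticate cycle repeated throughout this transcript for each digest/dhgroup/keyid combination reduces to the host-side RPC sequence below. It is reconstructed from the xtrace output above: rpc_cmd is the test suite's RPC wrapper, and key2/ckey2 stand for DHHC-1 keys registered earlier in the run, so treat this as an illustrative sketch rather than a verbatim excerpt of the script.

    # restrict the host (initiator) to the digest/dhgroup pair under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # connect to the target, authenticating with the key under test
    # (the controller key is passed only when a ckey exists for this keyid)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # confirm the controller came up, then tear it down before the next combination
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0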
00:20:00.498 nvme0n1 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: ]] 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:00.498 04:09:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.498 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.499 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.499 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:00.499 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.499 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:00.499 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:00.499 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:00.499 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.499 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.499 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.756 nvme0n1 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.756 04:09:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:00.756 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: ]] 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.757 04:09:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.757 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.016 nvme0n1 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: ]] 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.016 04:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.274 nvme0n1 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: ]] 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.274 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.533 nvme0n1 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.533 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.792 nvme0n1 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: ]] 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.792 04:09:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.792 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.056 nvme0n1 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: ]] 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.056 04:09:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.056 04:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.694 nvme0n1 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: ]] 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.694 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.953 nvme0n1 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: ]] 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.953 04:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.218 nvme0n1 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:03.218 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:03.219 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.219 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:03.219 04:09:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:03.219 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:03.219 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.219 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:03.219 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.485 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.485 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.485 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.485 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.485 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.485 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.485 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.485 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.485 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.485 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.485 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.485 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.485 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.485 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:03.485 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.485 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.745 nvme0n1 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: ]] 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.745 04:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.312 nvme0n1 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: ]] 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.312 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.896 nvme0n1 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.896 04:09:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: ]] 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.896 04:09:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.896 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.897 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.897 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.897 04:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.465 nvme0n1 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: ]] 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:05.465 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.465 
04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.034 nvme0n1 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.034 04:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.971 nvme0n1 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:06.971 04:09:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: ]] 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:06.971 04:09:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.971 nvme0n1 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: ]] 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:06.971 04:09:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.971 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:06.972 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.972 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:06.972 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:06.972 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:06.972 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.972 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.972 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.972 nvme0n1 00:20:06.972 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.972 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.972 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.972 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.972 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.972 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: ]] 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.231 04:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.231 nvme0n1 00:20:07.231 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.231 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.231 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: ]] 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.232 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.491 nvme0n1 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.491 nvme0n1 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.491 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.750 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.750 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.750 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.750 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.750 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.750 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.750 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.750 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:07.750 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.750 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:07.750 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:07.750 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:07.750 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: ]] 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:07.751 nvme0n1 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: ]] 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.751 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.011 nvme0n1 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:08.011 
04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: ]] 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.011 04:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.270 nvme0n1 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: ]] 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.270 
04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.270 nvme0n1 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.270 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.530 nvme0n1 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: ]] 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.530 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.789 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.789 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.789 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.789 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.789 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.789 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.789 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.789 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.789 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.789 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.790 nvme0n1 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.790 
04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: ]] 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.790 04:09:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.790 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.049 nvme0n1 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:09.049 04:09:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: ]] 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.049 04:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.308 nvme0n1 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: ]] 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.308 04:09:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.308 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.567 nvme0n1 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:09.567 
04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.567 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
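For readability, the following is a condensed sketch of the cycle the trace above keeps repeating for each DH group and key id. It is not part of the captured log: it assumes the auth.sh test environment is already sourced (rpc_cmd, nvmet_auth_set_key, and the keys/ckeys arrays come from the test scripts visible in the trace), and it reuses the addresses and NQNs seen in this run.

  #!/usr/bin/env bash
  # Sketch of one digest's loop as recorded above (sha512 shown; the outer loop
  # also walks ffdhe2048/3072/4096/6144 via "for dhgroup in ${dhgroups[@]}").
  digest=sha512
  dhgroup=ffdhe4096
  for keyid in "${!keys[@]}"; do
      # Target side: install key $keyid (and its controller key, if any) for
      # hmac(sha512)/$dhgroup in the nvmet subsystem.
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

      # Host side: restrict the negotiable digest/dhgroup, then attach with
      # DH-HMAC-CHAP keys. ckeys[4] is empty, so key 4 attaches without a
      # controller key, exactly as in the trace.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}

      # Verify the controller came up under the expected name, then tear it down
      # before the next key id.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  done

Each successful iteration in the log shows the namespace nvme0n1 appearing after the attach, the jq check matching nvme0, and the detach completing before the next key is set.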
00:20:09.826 nvme0n1 00:20:09.826 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.826 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.826 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.826 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.826 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.826 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.826 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.826 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.826 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.826 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.826 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.826 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.826 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.826 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:09.826 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: ]] 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:09.827 04:09:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.827 04:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.394 nvme0n1 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.394 04:09:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: ]] 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:10.394 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.395 04:09:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.395 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.653 nvme0n1 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:10.653 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: ]] 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.654 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.912 nvme0n1 00:20:10.912 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.912 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.912 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.912 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.912 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.912 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: ]] 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.171 04:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.446 nvme0n1 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.446 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.732 nvme0n1 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGI3YTZiZmZkMDMwNzBlYWFlMDYwZGMxZjU5MjYxMGOW9SEf: 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: ]] 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTI4OWMzNGJjMGIzNjI2NDZkNmMwZDAzODZhMTE5M2Q3Mjg1ZjJhNDRkZjU1YWIxZTA1ZDY2NzI5ZWU0NjQwYgxkeDM=: 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.732 04:09:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.732 04:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.326 nvme0n1 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: ]] 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.326 04:09:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.326 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.892 nvme0n1 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: ]] 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.892 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.150 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.150 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.150 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:13.150 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:13.150 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:13.150 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.150 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.150 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:13.150 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.151 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:13.151 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:13.151 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:13.151 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.151 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.151 04:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.718 nvme0n1 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2Y1OTIyN2ExN2VjNmI2YjFmZGVmMzdiODFkOWI2MGIxZmRkNjI4NWJmZmZlMDkymn87KQ==: 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: ]] 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjAxZjU0YmEwYTQ3NTExODFlMTc3ZGM1ZTIzYzhhOTaDcNAd: 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.718 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.284 nvme0n1 00:20:14.284 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.284 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.284 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.284 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.284 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.284 04:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWI5YWRiMWMyYWE3MTM0MDZiMmU5YTkyNThmZTQ2ZTdkNDNkZGFhYmQzYTQxMDBiYjg1NmE5YjlhZWRhODRhZHZg7cg=: 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.284 04:09:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.284 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.852 nvme0n1 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: ]] 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.852 request: 00:20:14.852 { 00:20:14.852 "name": "nvme0", 00:20:14.852 "trtype": "tcp", 00:20:14.852 "traddr": "10.0.0.1", 00:20:14.852 "adrfam": "ipv4", 00:20:14.852 "trsvcid": "4420", 00:20:14.852 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:14.852 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:14.852 "prchk_reftag": false, 00:20:14.852 "prchk_guard": false, 00:20:14.852 "hdgst": false, 00:20:14.852 "ddgst": false, 00:20:14.852 "allow_unrecognized_csi": false, 00:20:14.852 "method": "bdev_nvme_attach_controller", 00:20:14.852 "req_id": 1 00:20:14.852 } 00:20:14.852 Got JSON-RPC error response 00:20:14.852 response: 00:20:14.852 { 00:20:14.852 "code": -5, 00:20:14.852 "message": "Input/output error" 00:20:14.852 } 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:14.852 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.853 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.853 request: 00:20:14.853 { 00:20:14.853 "name": "nvme0", 00:20:14.853 "trtype": "tcp", 00:20:14.853 "traddr": "10.0.0.1", 00:20:14.853 "adrfam": "ipv4", 00:20:14.853 "trsvcid": "4420", 00:20:14.853 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:14.853 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:14.853 "prchk_reftag": false, 00:20:14.853 "prchk_guard": false, 00:20:14.853 "hdgst": false, 00:20:14.853 "ddgst": false, 00:20:14.853 "dhchap_key": "key2", 00:20:14.853 "allow_unrecognized_csi": false, 00:20:14.853 "method": "bdev_nvme_attach_controller", 00:20:14.853 "req_id": 1 00:20:14.853 } 00:20:14.853 Got JSON-RPC error response 00:20:14.853 response: 00:20:14.853 { 00:20:14.853 "code": -5, 00:20:14.853 "message": "Input/output error" 00:20:14.853 } 00:20:14.853 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:14.853 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:15.113 04:09:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.113 request: 00:20:15.113 { 00:20:15.113 "name": "nvme0", 00:20:15.113 "trtype": "tcp", 00:20:15.113 "traddr": "10.0.0.1", 00:20:15.113 "adrfam": "ipv4", 00:20:15.113 "trsvcid": "4420", 
00:20:15.113 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:15.113 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:15.113 "prchk_reftag": false, 00:20:15.113 "prchk_guard": false, 00:20:15.113 "hdgst": false, 00:20:15.113 "ddgst": false, 00:20:15.113 "dhchap_key": "key1", 00:20:15.113 "dhchap_ctrlr_key": "ckey2", 00:20:15.113 "allow_unrecognized_csi": false, 00:20:15.113 "method": "bdev_nvme_attach_controller", 00:20:15.113 "req_id": 1 00:20:15.113 } 00:20:15.113 Got JSON-RPC error response 00:20:15.113 response: 00:20:15.113 { 00:20:15.113 "code": -5, 00:20:15.113 "message": "Input/output error" 00:20:15.113 } 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.113 04:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.113 nvme0n1 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: ]] 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.113 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.373 request: 00:20:15.373 { 00:20:15.373 "name": "nvme0", 00:20:15.373 "dhchap_key": "key1", 00:20:15.373 "dhchap_ctrlr_key": "ckey2", 00:20:15.373 "method": "bdev_nvme_set_keys", 00:20:15.373 "req_id": 1 00:20:15.373 } 00:20:15.373 Got JSON-RPC error response 00:20:15.373 response: 00:20:15.373 
{ 00:20:15.373 "code": -13, 00:20:15.373 "message": "Permission denied" 00:20:15.373 } 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:20:15.373 04:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:16.309 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:16.309 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.309 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.309 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.309 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.309 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:20:16.309 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:16.309 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.309 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:16.309 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:16.309 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2M2QyYWVmOTQwMTFjNDg4NmU5N2EyNDAyNzlkNTM4ZjA0Mjk2MDZkOWU4YWQ4H3Porw==: 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: ]] 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmIxZGQ1NjFmMDBlNjUwOWNiMGQxYjRlMWZjNTRhMjg2MDMxYzMwZmUwMjg1OWE5GRem+A==: 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.310 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.569 nvme0n1 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgzOTNlMDdkYTZjMzU3MWNiNjUxN2JhZGY5MzA5OTlL58II: 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: ]] 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODc3MTIxZjFmMjI4NDk2ZTYzYjU2ZmQ5ZmJmYzAxY2TA8mKP: 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.569 request: 00:20:16.569 { 00:20:16.569 "name": "nvme0", 00:20:16.569 "dhchap_key": "key2", 00:20:16.569 "dhchap_ctrlr_key": "ckey1", 00:20:16.569 "method": "bdev_nvme_set_keys", 00:20:16.569 "req_id": 1 00:20:16.569 } 00:20:16.569 Got JSON-RPC error response 00:20:16.569 response: 00:20:16.569 { 00:20:16.569 "code": -13, 00:20:16.569 "message": "Permission denied" 00:20:16.569 } 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:16.569 04:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:17.506 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.506 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:17.506 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.506 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.506 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:17.764 rmmod nvme_tcp 00:20:17.764 rmmod nvme_fabrics 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78889 ']' 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78889 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78889 ']' 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78889 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78889 00:20:17.764 killing process with pid 78889 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78889' 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78889 00:20:17.764 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78889 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:18.023 04:09:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:18.023 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:18.282 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:18.282 04:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:18.282 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:18.282 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.282 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:18.282 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.282 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:20:18.282 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:18.282 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:18.282 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:18.282 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:18.282 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:20:18.282 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:18.282 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:18.282 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:18.282 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:18.282 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:18.282 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:18.282 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:18.848 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:19.107 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
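The cleanup traced above also removes the kernel-mode nvmet target that the auth suite configured through configfs. Condensed into plain shell, and with the caveat that the redirect target of the traced "echo 0" is not visible in the xtrace output (the namespace enable attribute below is an assumption), the teardown amounts to:

    # Teardown of the configfs-based kernel nvmet target used by nvmf_auth_host.
    # NQNs and paths are the ones shown in the trace; order matters: symlinks are
    # removed before the directories that contain them.
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1

    rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"   # drop the host allow-list link
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > "$subsys/namespaces/1/enable"                 # assumed target of the traced "echo 0"
    rm -f "$port/subsystems/nqn.2024-02.io.spdk:cnode0"    # unlink the port -> subsystem symlink
    rmdir "$subsys/namespaces/1"
    rmdir "$port"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet                            # unload the kernel target modules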
00:20:19.107 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:19.107 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.pP7 /tmp/spdk.key-null.mYz /tmp/spdk.key-sha256.ow6 /tmp/spdk.key-sha384.zX9 /tmp/spdk.key-sha512.cAQ /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:19.107 04:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:19.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:19.674 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:19.674 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:19.674 00:20:19.674 real 0m36.103s 00:20:19.674 user 0m33.459s 00:20:19.674 sys 0m3.923s 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:19.674 ************************************ 00:20:19.674 END TEST nvmf_auth_host 00:20:19.674 ************************************ 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.674 ************************************ 00:20:19.674 START TEST nvmf_digest 00:20:19.674 ************************************ 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:19.674 * Looking for test storage... 
00:20:19.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:19.674 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:19.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.933 --rc genhtml_branch_coverage=1 00:20:19.933 --rc genhtml_function_coverage=1 00:20:19.933 --rc genhtml_legend=1 00:20:19.933 --rc geninfo_all_blocks=1 00:20:19.933 --rc geninfo_unexecuted_blocks=1 00:20:19.933 00:20:19.933 ' 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:19.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.933 --rc genhtml_branch_coverage=1 00:20:19.933 --rc genhtml_function_coverage=1 00:20:19.933 --rc genhtml_legend=1 00:20:19.933 --rc geninfo_all_blocks=1 00:20:19.933 --rc geninfo_unexecuted_blocks=1 00:20:19.933 00:20:19.933 ' 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:19.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.933 --rc genhtml_branch_coverage=1 00:20:19.933 --rc genhtml_function_coverage=1 00:20:19.933 --rc genhtml_legend=1 00:20:19.933 --rc geninfo_all_blocks=1 00:20:19.933 --rc geninfo_unexecuted_blocks=1 00:20:19.933 00:20:19.933 ' 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:19.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.933 --rc genhtml_branch_coverage=1 00:20:19.933 --rc genhtml_function_coverage=1 00:20:19.933 --rc genhtml_legend=1 00:20:19.933 --rc geninfo_all_blocks=1 00:20:19.933 --rc geninfo_unexecuted_blocks=1 00:20:19.933 00:20:19.933 ' 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.933 04:10:01 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:19.933 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:19.933 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:19.934 Cannot find device "nvmf_init_br" 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:19.934 Cannot find device "nvmf_init_br2" 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:19.934 Cannot find device "nvmf_tgt_br" 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:20:19.934 Cannot find device "nvmf_tgt_br2" 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:19.934 Cannot find device "nvmf_init_br" 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:19.934 Cannot find device "nvmf_init_br2" 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:19.934 Cannot find device "nvmf_tgt_br" 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:19.934 Cannot find device "nvmf_tgt_br2" 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:19.934 Cannot find device "nvmf_br" 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:19.934 Cannot find device "nvmf_init_if" 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:19.934 Cannot find device "nvmf_init_if2" 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:19.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:19.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:19.934 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:20.192 04:10:01 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:20.192 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:20.192 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:20.193 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:20.193 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:20.193 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:20.193 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:20.193 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:20.193 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:20.193 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:20.193 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:20.193 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:20.193 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:20.193 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:20.193 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:20.193 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:20.193 04:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:20.193 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:20.193 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:20:20.193 00:20:20.193 --- 10.0.0.3 ping statistics --- 00:20:20.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.193 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:20.193 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:20.193 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:20:20.193 00:20:20.193 --- 10.0.0.4 ping statistics --- 00:20:20.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.193 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:20.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:20.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:20:20.193 00:20:20.193 --- 10.0.0.1 ping statistics --- 00:20:20.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.193 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:20.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:20.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:20:20.193 00:20:20.193 --- 10.0.0.2 ping statistics --- 00:20:20.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.193 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:20.193 ************************************ 00:20:20.193 START TEST nvmf_digest_clean 00:20:20.193 ************************************ 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
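The nvmf_veth_init sequence traced above builds the virtual topology the digest suite runs against: a network namespace for the target, veth pairs for initiator and target, a bridge joining the host-facing peers, iptables ACCEPT rules for the NVMe/TCP port, and a ping check of every address. A condensed recap using the same commands; only the first initiator/target pair is shown, the *_if2 interfaces (10.0.0.2 and 10.0.0.4) are created the same way, and the traced iptables calls additionally tag each rule with an SPDK_NVMF comment so that iptr can strip them during cleanup:

    # Host side: 10.0.0.1 (initiator); namespace side: 10.0.0.3 (target).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-facing peers together and let NVMe/TCP traffic through.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Connectivity check in both directions, as in the ping statistics above.
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1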
00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80529 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80529 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80529 ']' 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.193 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:20.452 [2024-12-09 04:10:02.159440] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:20:20.452 [2024-12-09 04:10:02.159549] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.452 [2024-12-09 04:10:02.311919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.452 [2024-12-09 04:10:02.380555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.452 [2024-12-09 04:10:02.380622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.452 [2024-12-09 04:10:02.380637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.452 [2024-12-09 04:10:02.380648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.452 [2024-12-09 04:10:02.380657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
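Note: nvmfappstart launches the target inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc and then blocks in waitforlisten until the RPC socket answers. A rough sketch of that start-and-wait step, assuming rpc_get_methods as the readiness probe (the real helper in autotest_common.sh retries with a limit, as the max_retries=100 line above shows):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # poll the UNIX domain RPC socket until the app is up (sketch of waitforlisten)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done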
00:20:20.452 [2024-12-09 04:10:02.381150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:20.710 [2024-12-09 04:10:02.545718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:20.710 null0 00:20:20.710 [2024-12-09 04:10:02.613310] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.710 [2024-12-09 04:10:02.637391] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80553 00:20:20.710 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80553 /var/tmp/bperf.sock 00:20:20.711 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:20.711 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80553 ']' 00:20:20.711 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:20:20.711 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:20.711 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:20.711 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.711 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:20.968 [2024-12-09 04:10:02.705480] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:20:20.968 [2024-12-09 04:10:02.705588] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80553 ] 00:20:20.968 [2024-12-09 04:10:02.857154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.226 [2024-12-09 04:10:02.923906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.226 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.226 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:21.226 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:21.226 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:21.226 04:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:21.487 [2024-12-09 04:10:03.238790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:21.487 04:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:21.487 04:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:21.811 nvme0n1 00:20:21.811 04:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:21.811 04:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:22.084 Running I/O for 2 seconds... 
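Note: each run_bperf pass in the trace follows the same pattern: start bdevperf with --wait-for-rpc, finish its framework init, attach an NVMe-oF controller with data digest enabled (--ddgst), and drive the workload through bdevperf.py. The same sequence collected in one place, using exactly the RPCs shown above:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bperf.sock framework_start_init
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests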
00:20:23.956 16894.00 IOPS, 65.99 MiB/s [2024-12-09T04:10:05.906Z] 16644.50 IOPS, 65.02 MiB/s 00:20:23.956 Latency(us) 00:20:23.956 [2024-12-09T04:10:05.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.956 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:23.956 nvme0n1 : 2.01 16659.02 65.07 0.00 0.00 7678.03 1809.69 21567.30 00:20:23.956 [2024-12-09T04:10:05.906Z] =================================================================================================================== 00:20:23.956 [2024-12-09T04:10:05.906Z] Total : 16659.02 65.07 0.00 0.00 7678.03 1809.69 21567.30 00:20:23.956 { 00:20:23.956 "results": [ 00:20:23.956 { 00:20:23.956 "job": "nvme0n1", 00:20:23.956 "core_mask": "0x2", 00:20:23.956 "workload": "randread", 00:20:23.956 "status": "finished", 00:20:23.956 "queue_depth": 128, 00:20:23.956 "io_size": 4096, 00:20:23.956 "runtime": 2.00594, 00:20:23.956 "iops": 16659.02270257336, 00:20:23.956 "mibps": 65.07430743192718, 00:20:23.956 "io_failed": 0, 00:20:23.956 "io_timeout": 0, 00:20:23.956 "avg_latency_us": 7678.034552473292, 00:20:23.956 "min_latency_us": 1809.6872727272728, 00:20:23.956 "max_latency_us": 21567.30181818182 00:20:23.956 } 00:20:23.956 ], 00:20:23.956 "core_count": 1 00:20:23.956 } 00:20:23.956 04:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:23.956 04:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:23.956 04:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:23.956 04:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:23.956 | select(.opcode=="crc32c") 00:20:23.956 | "\(.module_name) \(.executed)"' 00:20:23.956 04:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:24.215 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:24.215 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:24.215 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:24.215 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:24.215 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80553 00:20:24.215 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80553 ']' 00:20:24.215 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80553 00:20:24.215 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:24.215 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.215 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80553 00:20:24.216 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:24.216 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
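Note: after each 2-second run the script checks which accel module actually executed the crc32c digests. With DSA scanning disabled (scan_dsa=false) the expected module is "software", which is what the accel_get_stats + jq pipeline visible in the trace confirms:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # expected output in this run: "software <count>" with a non-zero executed count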
00:20:24.216 killing process with pid 80553 00:20:24.216 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80553' 00:20:24.216 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80553 00:20:24.216 Received shutdown signal, test time was about 2.000000 seconds 00:20:24.216 00:20:24.216 Latency(us) 00:20:24.216 [2024-12-09T04:10:06.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.216 [2024-12-09T04:10:06.166Z] =================================================================================================================== 00:20:24.216 [2024-12-09T04:10:06.166Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:24.216 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80553 00:20:24.475 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:24.475 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:24.475 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:24.475 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:24.475 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:24.475 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:24.475 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:24.475 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80606 00:20:24.475 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:24.475 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80606 /var/tmp/bperf.sock 00:20:24.475 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80606 ']' 00:20:24.475 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:24.475 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:24.475 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:24.475 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.475 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:24.734 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:24.734 Zero copy mechanism will not be used. 00:20:24.734 [2024-12-09 04:10:06.431553] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:20:24.734 [2024-12-09 04:10:06.431648] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80606 ] 00:20:24.734 [2024-12-09 04:10:06.575062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.734 [2024-12-09 04:10:06.631334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.734 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.734 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:24.734 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:24.734 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:24.734 04:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:25.302 [2024-12-09 04:10:06.967729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:25.302 04:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:25.302 04:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:25.560 nvme0n1 00:20:25.560 04:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:25.560 04:10:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:25.560 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:25.560 Zero copy mechanism will not be used. 00:20:25.560 Running I/O for 2 seconds... 
00:20:27.871 7152.00 IOPS, 894.00 MiB/s [2024-12-09T04:10:09.821Z] 7200.00 IOPS, 900.00 MiB/s 00:20:27.871 Latency(us) 00:20:27.871 [2024-12-09T04:10:09.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.871 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:27.871 nvme0n1 : 2.00 7198.14 899.77 0.00 0.00 2220.07 1921.40 10485.76 00:20:27.871 [2024-12-09T04:10:09.821Z] =================================================================================================================== 00:20:27.871 [2024-12-09T04:10:09.821Z] Total : 7198.14 899.77 0.00 0.00 2220.07 1921.40 10485.76 00:20:27.871 { 00:20:27.871 "results": [ 00:20:27.871 { 00:20:27.871 "job": "nvme0n1", 00:20:27.871 "core_mask": "0x2", 00:20:27.871 "workload": "randread", 00:20:27.871 "status": "finished", 00:20:27.871 "queue_depth": 16, 00:20:27.871 "io_size": 131072, 00:20:27.871 "runtime": 2.00274, 00:20:27.871 "iops": 7198.138550186245, 00:20:27.871 "mibps": 899.7673187732806, 00:20:27.871 "io_failed": 0, 00:20:27.871 "io_timeout": 0, 00:20:27.871 "avg_latency_us": 2220.070434870346, 00:20:27.871 "min_latency_us": 1921.3963636363637, 00:20:27.871 "max_latency_us": 10485.76 00:20:27.871 } 00:20:27.871 ], 00:20:27.871 "core_count": 1 00:20:27.871 } 00:20:27.871 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:27.871 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:27.871 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:27.871 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:27.871 | select(.opcode=="crc32c") 00:20:27.871 | "\(.module_name) \(.executed)"' 00:20:27.871 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:27.871 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:27.871 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:27.871 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:27.871 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:27.871 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80606 00:20:27.871 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80606 ']' 00:20:27.871 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80606 00:20:28.129 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:28.129 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.129 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80606 00:20:28.129 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:28.129 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:28.129 
killing process with pid 80606 00:20:28.129 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80606' 00:20:28.129 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80606 00:20:28.129 Received shutdown signal, test time was about 2.000000 seconds 00:20:28.129 00:20:28.129 Latency(us) 00:20:28.129 [2024-12-09T04:10:10.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.129 [2024-12-09T04:10:10.079Z] =================================================================================================================== 00:20:28.129 [2024-12-09T04:10:10.079Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:28.129 04:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80606 00:20:28.388 04:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:28.388 04:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:28.388 04:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:28.388 04:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:28.388 04:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:28.388 04:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:28.388 04:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:28.388 04:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80659 00:20:28.388 04:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80659 /var/tmp/bperf.sock 00:20:28.388 04:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:28.388 04:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80659 ']' 00:20:28.388 04:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:28.388 04:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:28.388 04:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:28.388 04:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.388 04:10:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:28.388 [2024-12-09 04:10:10.184838] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:20:28.388 [2024-12-09 04:10:10.184959] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80659 ] 00:20:28.388 [2024-12-09 04:10:10.326318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.647 [2024-12-09 04:10:10.406222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.583 04:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.583 04:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:29.583 04:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:29.583 04:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:29.583 04:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:29.583 [2024-12-09 04:10:11.467769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:29.841 04:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:29.841 04:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:30.099 nvme0n1 00:20:30.099 04:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:30.099 04:10:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:30.099 Running I/O for 2 seconds... 
00:20:32.006 17273.00 IOPS, 67.47 MiB/s [2024-12-09T04:10:13.957Z] 17653.50 IOPS, 68.96 MiB/s 00:20:32.007 Latency(us) 00:20:32.007 [2024-12-09T04:10:13.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.007 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:32.007 nvme0n1 : 2.01 17673.28 69.04 0.00 0.00 7236.56 2517.18 16443.58 00:20:32.007 [2024-12-09T04:10:13.957Z] =================================================================================================================== 00:20:32.007 [2024-12-09T04:10:13.957Z] Total : 17673.28 69.04 0.00 0.00 7236.56 2517.18 16443.58 00:20:32.007 { 00:20:32.007 "results": [ 00:20:32.007 { 00:20:32.007 "job": "nvme0n1", 00:20:32.007 "core_mask": "0x2", 00:20:32.007 "workload": "randwrite", 00:20:32.007 "status": "finished", 00:20:32.007 "queue_depth": 128, 00:20:32.007 "io_size": 4096, 00:20:32.007 "runtime": 2.005004, 00:20:32.007 "iops": 17673.28144981257, 00:20:32.007 "mibps": 69.03625566333035, 00:20:32.007 "io_failed": 0, 00:20:32.007 "io_timeout": 0, 00:20:32.007 "avg_latency_us": 7236.558939107456, 00:20:32.007 "min_latency_us": 2517.1781818181817, 00:20:32.007 "max_latency_us": 16443.578181818182 00:20:32.007 } 00:20:32.007 ], 00:20:32.007 "core_count": 1 00:20:32.007 } 00:20:32.319 04:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:32.319 04:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:32.319 04:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:32.319 04:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:32.319 | select(.opcode=="crc32c") 00:20:32.319 | "\(.module_name) \(.executed)"' 00:20:32.319 04:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:32.319 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:32.319 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:32.319 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:32.319 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:32.319 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80659 00:20:32.319 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80659 ']' 00:20:32.319 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80659 00:20:32.319 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:32.319 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.319 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80659 00:20:32.576 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:32.576 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
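Note: each run ends with a JSON result block like the ones above. A small, hypothetical post-processing step (not part of the test scripts) that pulls the headline numbers out of a captured block, using the field names printed in the log ("job", "iops", "mibps", "avg_latency_us"):

    # results.json is assumed to hold one captured result block from bdevperf.py
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json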
00:20:32.576 killing process with pid 80659 00:20:32.576 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80659' 00:20:32.576 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80659 00:20:32.576 Received shutdown signal, test time was about 2.000000 seconds 00:20:32.576 00:20:32.576 Latency(us) 00:20:32.576 [2024-12-09T04:10:14.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.576 [2024-12-09T04:10:14.526Z] =================================================================================================================== 00:20:32.576 [2024-12-09T04:10:14.526Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.576 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80659 00:20:32.835 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:32.835 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:32.835 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:32.835 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:32.835 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:32.835 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:32.835 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:32.835 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80720 00:20:32.835 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:32.835 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80720 /var/tmp/bperf.sock 00:20:32.835 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80720 ']' 00:20:32.835 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:32.835 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:32.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:32.835 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:32.835 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:32.835 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:32.835 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:32.835 Zero copy mechanism will not be used. 00:20:32.835 [2024-12-09 04:10:14.591045] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
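Note: this is the fourth and last bdevperf pass of the clean-digest test; host/digest.sh cycles the same flow over four (rw, block size, queue depth) combinations, as the run_bperf lines at @128 through @131 show. A condensed view of that loop, calling the script's own run_bperf helper:

    # randread/randwrite, 4 KiB at qd 128 and 128 KiB at qd 16, all with DSA disabled
    for combo in "randread 4096 128" "randread 131072 16" \
                 "randwrite 4096 128" "randwrite 131072 16"; do
        run_bperf $combo false    # word splitting intentional: rw, bs, qd, then scan_dsa=false
    done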
00:20:32.835 [2024-12-09 04:10:14.591141] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80720 ] 00:20:32.835 [2024-12-09 04:10:14.732114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.092 [2024-12-09 04:10:14.791769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.092 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:33.092 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:33.092 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:33.092 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:33.092 04:10:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:33.350 [2024-12-09 04:10:15.127484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:33.350 04:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:33.350 04:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:33.608 nvme0n1 00:20:33.608 04:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:33.608 04:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:33.869 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:33.869 Zero copy mechanism will not be used. 00:20:33.869 Running I/O for 2 seconds... 
00:20:35.744 6219.00 IOPS, 777.38 MiB/s [2024-12-09T04:10:17.694Z] 6297.50 IOPS, 787.19 MiB/s 00:20:35.744 Latency(us) 00:20:35.744 [2024-12-09T04:10:17.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.744 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:35.744 nvme0n1 : 2.00 6294.62 786.83 0.00 0.00 2536.63 1802.24 8281.37 00:20:35.744 [2024-12-09T04:10:17.694Z] =================================================================================================================== 00:20:35.744 [2024-12-09T04:10:17.694Z] Total : 6294.62 786.83 0.00 0.00 2536.63 1802.24 8281.37 00:20:35.744 { 00:20:35.744 "results": [ 00:20:35.744 { 00:20:35.744 "job": "nvme0n1", 00:20:35.744 "core_mask": "0x2", 00:20:35.744 "workload": "randwrite", 00:20:35.744 "status": "finished", 00:20:35.744 "queue_depth": 16, 00:20:35.744 "io_size": 131072, 00:20:35.744 "runtime": 2.003298, 00:20:35.744 "iops": 6294.620171337465, 00:20:35.744 "mibps": 786.8275214171831, 00:20:35.744 "io_failed": 0, 00:20:35.744 "io_timeout": 0, 00:20:35.744 "avg_latency_us": 2536.6346985797704, 00:20:35.744 "min_latency_us": 1802.24, 00:20:35.744 "max_latency_us": 8281.367272727273 00:20:35.744 } 00:20:35.744 ], 00:20:35.744 "core_count": 1 00:20:35.744 } 00:20:35.744 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:35.744 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:35.744 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:35.744 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:35.744 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:35.744 | select(.opcode=="crc32c") 00:20:35.744 | "\(.module_name) \(.executed)"' 00:20:36.313 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:36.313 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:36.313 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:36.313 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:36.313 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80720 00:20:36.313 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80720 ']' 00:20:36.313 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80720 00:20:36.313 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:36.313 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.313 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80720 00:20:36.313 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:36.313 killing process with pid 80720 00:20:36.313 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:20:36.313 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80720' 00:20:36.313 Received shutdown signal, test time was about 2.000000 seconds 00:20:36.313 00:20:36.313 Latency(us) 00:20:36.313 [2024-12-09T04:10:18.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.313 [2024-12-09T04:10:18.263Z] =================================================================================================================== 00:20:36.313 [2024-12-09T04:10:18.263Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:36.313 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80720 00:20:36.313 04:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80720 00:20:36.313 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80529 00:20:36.313 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80529 ']' 00:20:36.313 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80529 00:20:36.313 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:36.313 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.313 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80529 00:20:36.572 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:36.572 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:36.572 killing process with pid 80529 00:20:36.572 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80529' 00:20:36.572 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80529 00:20:36.572 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80529 00:20:36.572 00:20:36.572 real 0m16.398s 00:20:36.572 user 0m30.828s 00:20:36.572 sys 0m5.546s 00:20:36.572 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.572 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:36.572 ************************************ 00:20:36.572 END TEST nvmf_digest_clean 00:20:36.572 ************************************ 00:20:36.830 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:36.830 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:36.830 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.830 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:36.830 ************************************ 00:20:36.830 START TEST nvmf_digest_error 00:20:36.830 ************************************ 00:20:36.830 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:20:36.830 04:10:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:36.830 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:36.830 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.830 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:36.830 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80796 00:20:36.830 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80796 00:20:36.830 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80796 ']' 00:20:36.830 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:36.830 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.830 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.830 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.830 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.830 04:10:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:36.831 [2024-12-09 04:10:18.620891] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:20:36.831 [2024-12-09 04:10:18.620998] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.831 [2024-12-09 04:10:18.771681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.089 [2024-12-09 04:10:18.828486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.089 [2024-12-09 04:10:18.828543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.089 [2024-12-09 04:10:18.828553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.089 [2024-12-09 04:10:18.828562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.089 [2024-12-09 04:10:18.828569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
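Note: the nvmf_digest_error test that starts here brings up a fresh target (pid 80796) and, as the following trace shows, routes crc32c through the "error" accel module so digest failures can be injected on demand. The relevant RPCs from that trace, collected as a sketch (socket defaults assumed: /var/tmp/spdk.sock for the target, /var/tmp/bperf.sock for bdevperf):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # on the target: let the error module handle crc32c, with injection initially disabled
    $RPC accel_assign_opc -o crc32c -m error
    # on the bdevperf side: keep NVMe error stats and retry indefinitely
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt the next 256 crc32c operations; reads then complete with the
    # "COMMAND TRANSIENT TRANSPORT ERROR" seen in the nvme_qpair output below
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256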
00:20:37.089 [2024-12-09 04:10:18.828924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.681 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.681 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:37.681 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:37.681 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:37.681 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:37.681 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.681 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:37.681 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.681 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:37.681 [2024-12-09 04:10:19.549447] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:37.681 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.681 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:37.681 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:37.681 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.681 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:37.681 [2024-12-09 04:10:19.611930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:37.940 null0 00:20:37.940 [2024-12-09 04:10:19.666200] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.940 [2024-12-09 04:10:19.690409] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:37.940 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.940 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:37.940 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:37.940 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:37.940 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:37.940 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:37.940 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80828 00:20:37.940 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:37.940 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80828 /var/tmp/bperf.sock 00:20:37.940 04:10:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80828 ']' 00:20:37.940 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:37.941 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:37.941 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:37.941 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.941 04:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:37.941 [2024-12-09 04:10:19.756146] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:20:37.941 [2024-12-09 04:10:19.756264] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80828 ] 00:20:38.199 [2024-12-09 04:10:19.910187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.199 [2024-12-09 04:10:19.980799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.199 [2024-12-09 04:10:20.060550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:39.133 04:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.133 04:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:39.133 04:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:39.133 04:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:39.134 04:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:39.134 04:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.134 04:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:39.134 04:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.134 04:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:39.134 04:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:39.698 nvme0n1 00:20:39.698 04:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:39.698 04:10:21 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.698 04:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:39.698 04:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.698 04:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:39.698 04:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:39.698 Running I/O for 2 seconds... 00:20:39.698 [2024-12-09 04:10:21.529842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:39.698 [2024-12-09 04:10:21.529910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.698 [2024-12-09 04:10:21.529924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:39.698 [2024-12-09 04:10:21.544846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:39.698 [2024-12-09 04:10:21.544885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.698 [2024-12-09 04:10:21.544914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:39.698 [2024-12-09 04:10:21.559855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:39.698 [2024-12-09 04:10:21.559892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.698 [2024-12-09 04:10:21.559920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:39.698 [2024-12-09 04:10:21.575331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:39.698 [2024-12-09 04:10:21.575367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.698 [2024-12-09 04:10:21.575395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:39.698 [2024-12-09 04:10:21.590928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:39.698 [2024-12-09 04:10:21.591163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.698 [2024-12-09 04:10:21.591230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:39.698 [2024-12-09 04:10:21.607893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:39.698 [2024-12-09 04:10:21.607956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8513 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.698 [2024-12-09 04:10:21.607968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:39.698 [2024-12-09 04:10:21.624039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:39.698 [2024-12-09 04:10:21.624075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.698 [2024-12-09 04:10:21.624102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:39.698 [2024-12-09 04:10:21.638645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:39.698 [2024-12-09 04:10:21.638858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.698 [2024-12-09 04:10:21.638891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.654376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.654415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.654458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.668748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.668784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.668812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.683193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.683236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.683264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.697486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.697522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.697549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.711628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.711676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:85 nsid:1 lba:1492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.711703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.725979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.726014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.726042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.740260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.740296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.740324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.754440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.754475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.754502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.768453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.768489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.768516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.782717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.782752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.782780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.797060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.797097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.797125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.811374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.811411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.811439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.825507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.825545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.825573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.839581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.839618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.839645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.853729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.853765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.853793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.868390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.868426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.868453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.882610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.882645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.882673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.896853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.896888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.896916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.911025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 
00:20:40.014 [2024-12-09 04:10:21.911220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.911253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.925336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.925374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.925401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.939466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.939502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.939530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.014 [2024-12-09 04:10:21.953665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.014 [2024-12-09 04:10:21.953700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.014 [2024-12-09 04:10:21.953728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.273 [2024-12-09 04:10:21.968629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.273 [2024-12-09 04:10:21.968664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.273 [2024-12-09 04:10:21.968691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.273 [2024-12-09 04:10:21.982794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.273 [2024-12-09 04:10:21.982829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.273 [2024-12-09 04:10:21.982857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.273 [2024-12-09 04:10:21.996926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.273 [2024-12-09 04:10:21.996962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.273 [2024-12-09 04:10:21.996989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.273 [2024-12-09 04:10:22.011127] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.273 [2024-12-09 04:10:22.011163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.273 [2024-12-09 04:10:22.011219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.273 [2024-12-09 04:10:22.025277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.273 [2024-12-09 04:10:22.025312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.273 [2024-12-09 04:10:22.025339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.273 [2024-12-09 04:10:22.039297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.273 [2024-12-09 04:10:22.039333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.273 [2024-12-09 04:10:22.039361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.273 [2024-12-09 04:10:22.053502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.273 [2024-12-09 04:10:22.053538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.273 [2024-12-09 04:10:22.053565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.273 [2024-12-09 04:10:22.067672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.273 [2024-12-09 04:10:22.067708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.273 [2024-12-09 04:10:22.067735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.273 [2024-12-09 04:10:22.082197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.273 [2024-12-09 04:10:22.082233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.274 [2024-12-09 04:10:22.082260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.274 [2024-12-09 04:10:22.098643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.274 [2024-12-09 04:10:22.098684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.274 [2024-12-09 04:10:22.098712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:40.274 [2024-12-09 04:10:22.113721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.274 [2024-12-09 04:10:22.113756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.274 [2024-12-09 04:10:22.113784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.274 [2024-12-09 04:10:22.128787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.274 [2024-12-09 04:10:22.128822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.274 [2024-12-09 04:10:22.128850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.274 [2024-12-09 04:10:22.143507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.274 [2024-12-09 04:10:22.143543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.274 [2024-12-09 04:10:22.143571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.274 [2024-12-09 04:10:22.158148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.274 [2024-12-09 04:10:22.158212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.274 [2024-12-09 04:10:22.158241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.274 [2024-12-09 04:10:22.173419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.274 [2024-12-09 04:10:22.173603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.274 [2024-12-09 04:10:22.173635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.274 [2024-12-09 04:10:22.188160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.274 [2024-12-09 04:10:22.188226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.274 [2024-12-09 04:10:22.188255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.274 [2024-12-09 04:10:22.203054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.274 [2024-12-09 04:10:22.203089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.274 [2024-12-09 04:10:22.203117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.274 [2024-12-09 04:10:22.217817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.274 [2024-12-09 04:10:22.217852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.274 [2024-12-09 04:10:22.217879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.532 [2024-12-09 04:10:22.233184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.532 [2024-12-09 04:10:22.233218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.532 [2024-12-09 04:10:22.233246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.532 [2024-12-09 04:10:22.247653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.532 [2024-12-09 04:10:22.247687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.532 [2024-12-09 04:10:22.247715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.532 [2024-12-09 04:10:22.261770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.532 [2024-12-09 04:10:22.261806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.532 [2024-12-09 04:10:22.261835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.532 [2024-12-09 04:10:22.276347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.532 [2024-12-09 04:10:22.276534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.532 [2024-12-09 04:10:22.276566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.532 [2024-12-09 04:10:22.292159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.532 [2024-12-09 04:10:22.292256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.532 [2024-12-09 04:10:22.292272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.532 [2024-12-09 04:10:22.307073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.532 [2024-12-09 04:10:22.307305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.532 [2024-12-09 04:10:22.307323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.532 [2024-12-09 04:10:22.321874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.532 [2024-12-09 04:10:22.321910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.532 [2024-12-09 04:10:22.321937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.532 [2024-12-09 04:10:22.336200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.532 [2024-12-09 04:10:22.336235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.532 [2024-12-09 04:10:22.336262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.532 [2024-12-09 04:10:22.350391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.532 [2024-12-09 04:10:22.350634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.532 [2024-12-09 04:10:22.350651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.532 [2024-12-09 04:10:22.364978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.532 [2024-12-09 04:10:22.365014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.532 [2024-12-09 04:10:22.365042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.532 [2024-12-09 04:10:22.379012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.532 [2024-12-09 04:10:22.379047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.532 [2024-12-09 04:10:22.379075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.532 [2024-12-09 04:10:22.393695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.532 [2024-12-09 04:10:22.393731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.532 [2024-12-09 04:10:22.393759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.532 [2024-12-09 04:10:22.409455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.532 [2024-12-09 04:10:22.409501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.532 [2024-12-09 04:10:22.409513] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.532 [2024-12-09 04:10:22.425088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.532 [2024-12-09 04:10:22.425319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.532 [2024-12-09 04:10:22.425353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.532 [2024-12-09 04:10:22.440350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.532 [2024-12-09 04:10:22.440545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.533 [2024-12-09 04:10:22.440578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.533 [2024-12-09 04:10:22.461731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.533 [2024-12-09 04:10:22.461767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.533 [2024-12-09 04:10:22.461795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.533 [2024-12-09 04:10:22.476663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.533 [2024-12-09 04:10:22.476699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.533 [2024-12-09 04:10:22.476727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.790 [2024-12-09 04:10:22.492260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.790 [2024-12-09 04:10:22.492296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.790 [2024-12-09 04:10:22.492323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.790 17079.00 IOPS, 66.71 MiB/s [2024-12-09T04:10:22.740Z] [2024-12-09 04:10:22.507142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.790 [2024-12-09 04:10:22.507222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.790 [2024-12-09 04:10:22.507251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.790 [2024-12-09 04:10:22.521861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.790 [2024-12-09 04:10:22.521896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 
lba:16707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.790 [2024-12-09 04:10:22.521924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.790 [2024-12-09 04:10:22.537150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.790 [2024-12-09 04:10:22.537215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.790 [2024-12-09 04:10:22.537244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.790 [2024-12-09 04:10:22.552260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.790 [2024-12-09 04:10:22.552303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.790 [2024-12-09 04:10:22.552332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.790 [2024-12-09 04:10:22.568112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.790 [2024-12-09 04:10:22.568149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.790 [2024-12-09 04:10:22.568194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.790 [2024-12-09 04:10:22.583279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.790 [2024-12-09 04:10:22.583315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.790 [2024-12-09 04:10:22.583343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.790 [2024-12-09 04:10:22.599294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.790 [2024-12-09 04:10:22.599329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.790 [2024-12-09 04:10:22.599356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.790 [2024-12-09 04:10:22.615409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.790 [2024-12-09 04:10:22.615447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.790 [2024-12-09 04:10:22.615474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.790 [2024-12-09 04:10:22.632649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.790 [2024-12-09 04:10:22.632683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.790 [2024-12-09 04:10:22.632710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.790 [2024-12-09 04:10:22.649546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.790 [2024-12-09 04:10:22.649580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.790 [2024-12-09 04:10:22.649608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.790 [2024-12-09 04:10:22.664485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.790 [2024-12-09 04:10:22.664521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.790 [2024-12-09 04:10:22.664549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.790 [2024-12-09 04:10:22.679566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.790 [2024-12-09 04:10:22.679603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.790 [2024-12-09 04:10:22.679631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.791 [2024-12-09 04:10:22.694314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.791 [2024-12-09 04:10:22.694527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.791 [2024-12-09 04:10:22.694578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.791 [2024-12-09 04:10:22.709982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.791 [2024-12-09 04:10:22.710244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.791 [2024-12-09 04:10:22.710278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:40.791 [2024-12-09 04:10:22.725391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:40.791 [2024-12-09 04:10:22.725610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.791 [2024-12-09 04:10:22.725776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.049 [2024-12-09 04:10:22.741865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.049 
[2024-12-09 04:10:22.742126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.049 [2024-12-09 04:10:22.742301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.049 [2024-12-09 04:10:22.758132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.049 [2024-12-09 04:10:22.758394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.049 [2024-12-09 04:10:22.758582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.049 [2024-12-09 04:10:22.774137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.049 [2024-12-09 04:10:22.774394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.049 [2024-12-09 04:10:22.774553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.049 [2024-12-09 04:10:22.789507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.049 [2024-12-09 04:10:22.789717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.049 [2024-12-09 04:10:22.789856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.049 [2024-12-09 04:10:22.805012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.049 [2024-12-09 04:10:22.805281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.049 [2024-12-09 04:10:22.805410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.049 [2024-12-09 04:10:22.820118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.049 [2024-12-09 04:10:22.820344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.049 [2024-12-09 04:10:22.820501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.049 [2024-12-09 04:10:22.835214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.049 [2024-12-09 04:10:22.835430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.049 [2024-12-09 04:10:22.835625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.049 [2024-12-09 04:10:22.850290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xfefb50) 00:20:41.049 [2024-12-09 04:10:22.850506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.049 [2024-12-09 04:10:22.850661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.049 [2024-12-09 04:10:22.866250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.049 [2024-12-09 04:10:22.866477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.049 [2024-12-09 04:10:22.866708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.049 [2024-12-09 04:10:22.881824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.049 [2024-12-09 04:10:22.882021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.049 [2024-12-09 04:10:22.882054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.049 [2024-12-09 04:10:22.896741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.049 [2024-12-09 04:10:22.896934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.049 [2024-12-09 04:10:22.896967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.049 [2024-12-09 04:10:22.911860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.049 [2024-12-09 04:10:22.911898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.049 [2024-12-09 04:10:22.911925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.049 [2024-12-09 04:10:22.926551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.049 [2024-12-09 04:10:22.926746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.049 [2024-12-09 04:10:22.926778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.049 [2024-12-09 04:10:22.940799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.049 [2024-12-09 04:10:22.940836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.049 [2024-12-09 04:10:22.940864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.049 [2024-12-09 04:10:22.955060] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.049 [2024-12-09 04:10:22.955096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.049 [2024-12-09 04:10:22.955124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.049 [2024-12-09 04:10:22.969168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.049 [2024-12-09 04:10:22.969231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.049 [2024-12-09 04:10:22.969260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.049 [2024-12-09 04:10:22.983941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.049 [2024-12-09 04:10:22.983976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.049 [2024-12-09 04:10:22.984004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.307 [2024-12-09 04:10:22.998800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.308 [2024-12-09 04:10:22.999012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.308 [2024-12-09 04:10:22.999029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.308 [2024-12-09 04:10:23.013406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.308 [2024-12-09 04:10:23.013600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.308 [2024-12-09 04:10:23.013632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.308 [2024-12-09 04:10:23.027701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.308 [2024-12-09 04:10:23.027739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.308 [2024-12-09 04:10:23.027766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.308 [2024-12-09 04:10:23.041814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.308 [2024-12-09 04:10:23.041851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.308 [2024-12-09 04:10:23.041877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:20:41.308 [2024-12-09 04:10:23.056056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.308 [2024-12-09 04:10:23.056093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.308 [2024-12-09 04:10:23.056121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.308 [2024-12-09 04:10:23.070274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.308 [2024-12-09 04:10:23.070312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.308 [2024-12-09 04:10:23.070339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.308 [2024-12-09 04:10:23.084298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.308 [2024-12-09 04:10:23.084334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.308 [2024-12-09 04:10:23.084361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.308 [2024-12-09 04:10:23.098440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.308 [2024-12-09 04:10:23.098672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.308 [2024-12-09 04:10:23.098705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.308 [2024-12-09 04:10:23.112761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.308 [2024-12-09 04:10:23.112799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.308 [2024-12-09 04:10:23.112827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.308 [2024-12-09 04:10:23.126815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.308 [2024-12-09 04:10:23.126850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.308 [2024-12-09 04:10:23.126878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.308 [2024-12-09 04:10:23.140820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.308 [2024-12-09 04:10:23.140857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.308 [2024-12-09 04:10:23.140884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.308 [2024-12-09 04:10:23.155010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.308 [2024-12-09 04:10:23.155045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.308 [2024-12-09 04:10:23.155072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.308 [2024-12-09 04:10:23.169032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.308 [2024-12-09 04:10:23.169069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.308 [2024-12-09 04:10:23.169097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.308 [2024-12-09 04:10:23.183050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.308 [2024-12-09 04:10:23.183281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.308 [2024-12-09 04:10:23.183313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.308 [2024-12-09 04:10:23.197207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.308 [2024-12-09 04:10:23.197242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.308 [2024-12-09 04:10:23.197269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.308 [2024-12-09 04:10:23.211295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.308 [2024-12-09 04:10:23.211331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.308 [2024-12-09 04:10:23.211358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.308 [2024-12-09 04:10:23.225276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.308 [2024-12-09 04:10:23.225312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.308 [2024-12-09 04:10:23.225339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.308 [2024-12-09 04:10:23.239973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.308 [2024-12-09 04:10:23.240008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.308 [2024-12-09 04:10:23.240036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.566 [2024-12-09 04:10:23.255141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.566 [2024-12-09 04:10:23.255380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.566 [2024-12-09 04:10:23.255413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.566 [2024-12-09 04:10:23.270425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.566 [2024-12-09 04:10:23.270609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.566 [2024-12-09 04:10:23.270642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.566 [2024-12-09 04:10:23.285429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.566 [2024-12-09 04:10:23.285639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.566 [2024-12-09 04:10:23.285673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.566 [2024-12-09 04:10:23.300617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.566 [2024-12-09 04:10:23.300653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.566 [2024-12-09 04:10:23.300681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.566 [2024-12-09 04:10:23.317168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.566 [2024-12-09 04:10:23.317248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.566 [2024-12-09 04:10:23.317263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.566 [2024-12-09 04:10:23.332696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.566 [2024-12-09 04:10:23.332731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.566 [2024-12-09 04:10:23.332758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.567 [2024-12-09 04:10:23.348378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.567 [2024-12-09 04:10:23.348414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.567 
[2024-12-09 04:10:23.348441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.567 [2024-12-09 04:10:23.362914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.567 [2024-12-09 04:10:23.363128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.567 [2024-12-09 04:10:23.363146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.567 [2024-12-09 04:10:23.377745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.567 [2024-12-09 04:10:23.377938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.567 [2024-12-09 04:10:23.377970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.567 [2024-12-09 04:10:23.392548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.567 [2024-12-09 04:10:23.392746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.567 [2024-12-09 04:10:23.392780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.567 [2024-12-09 04:10:23.407315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.567 [2024-12-09 04:10:23.407508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.567 [2024-12-09 04:10:23.407541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.567 [2024-12-09 04:10:23.428885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.567 [2024-12-09 04:10:23.428922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.567 [2024-12-09 04:10:23.428950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.567 [2024-12-09 04:10:23.443709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.567 [2024-12-09 04:10:23.443747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.567 [2024-12-09 04:10:23.443774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:41.567 [2024-12-09 04:10:23.457999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50) 00:20:41.567 [2024-12-09 04:10:23.458044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6877 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.567 [2024-12-09 04:10:23.458071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:41.567 [2024-12-09 04:10:23.472357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50)
00:20:41.567 [2024-12-09 04:10:23.472393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.567 [2024-12-09 04:10:23.472421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.567 [2024-12-09 04:10:23.486494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50)
00:20:41.567 [2024-12-09 04:10:23.486530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.567 [2024-12-09 04:10:23.486558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.567 [2024-12-09 04:10:23.500803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfefb50)
00:20:41.567 [2024-12-09 04:10:23.500838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.567 [2024-12-09 04:10:23.500865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:41.567 17015.00 IOPS, 66.46 MiB/s
00:20:41.567 Latency(us)
00:20:41.567 [2024-12-09T04:10:23.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:41.567 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:20:41.567 nvme0n1 : 2.01 17010.47 66.45 0.00 0.00 7519.39 6821.70 28597.53
00:20:41.567 [2024-12-09T04:10:23.517Z] ===================================================================================================================
00:20:41.567 [2024-12-09T04:10:23.517Z] Total : 17010.47 66.45 0.00 0.00 7519.39 6821.70 28597.53
00:20:41.567 {
00:20:41.567   "results": [
00:20:41.567     {
00:20:41.567       "job": "nvme0n1",
00:20:41.567       "core_mask": "0x2",
00:20:41.567       "workload": "randread",
00:20:41.567       "status": "finished",
00:20:41.567       "queue_depth": 128,
00:20:41.567       "io_size": 4096,
00:20:41.567       "runtime": 2.008057,
00:20:41.567       "iops": 17010.473308277604,
00:20:41.567       "mibps": 66.44716136045939,
00:20:41.567       "io_failed": 0,
00:20:41.567       "io_timeout": 0,
00:20:41.567       "avg_latency_us": 7519.388612277704,
00:20:41.567       "min_latency_us": 6821.701818181818,
00:20:41.567       "max_latency_us": 28597.52727272727
00:20:41.567     }
00:20:41.567   ],
00:20:41.567   "core_count": 1
00:20:41.567 }
00:20:41.824 04:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:41.824 04:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:41.824 04:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:41.824 | .driver_specific
00:20:41.824 | .nvme_error
00:20:41.824 | .status_code
00:20:41.824 | .command_transient_transport_error'
00:20:41.824 04:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:42.083 04:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 133 > 0 ))
00:20:42.083 04:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80828
00:20:42.083 04:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80828 ']'
00:20:42.083 04:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80828
00:20:42.083 04:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:20:42.083 04:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:42.083 04:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80828
00:20:42.083 04:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:42.083 killing process with pid 80828
04:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:42.083 04:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80828'
Received shutdown signal, test time was about 2.000000 seconds
00:20:42.083
00:20:42.083 Latency(us)
00:20:42.083 [2024-12-09T04:10:24.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:42.083 [2024-12-09T04:10:24.033Z] ===================================================================================================================
00:20:42.083 [2024-12-09T04:10:24.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:42.083 04:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80828
00:20:42.083 04:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80828
00:20:42.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
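The (( 133 > 0 )) check above is the pass criterion for this run: get_transient_errcount asks bdevperf, over the /var/tmp/bperf.sock RPC socket, for the bdev's iostat and pulls out how many completions carried COMMAND TRANSIENT TRANSPORT ERROR. A minimal standalone sketch of that query, using only the rpc.py path, socket, and jq filter shown in the trace (the nvme_error counters are populated because bdev_nvme_set_options is called with --nvme-error-stat, as the next run's trace shows):

# Sketch of the traced query: transient transport error count for one bdev, as seen by bdevperf.
get_transient_errcount() {
    local bdev=$1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}

# The digest-error test passes only if at least one injected crc32c error surfaced this way
# (the trace above shows 133 of them for the qd=128 run).
(( $(get_transient_errcount nvme0n1) > 0 ))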
00:20:42.342 04:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:20:42.342 04:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:20:42.342 04:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:20:42.342 04:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:20:42.342 04:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:20:42.342 04:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80888
00:20:42.342 04:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80888 /var/tmp/bperf.sock
00:20:42.342 04:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80888 ']'
00:20:42.342 04:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:42.342 04:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:20:42.342 04:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:42.342 04:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:42.342 04:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:42.342 04:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:42.342 [2024-12-09 04:10:24.164355] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization...
00:20:42.342 [2024-12-09 04:10:24.164991] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80888 ]
00:20:42.342 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:42.342 Zero copy mechanism will not be used.
00:20:42.611 [2024-12-09 04:10:24.312042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:42.611 [2024-12-09 04:10:24.378989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:42.611 [2024-12-09 04:10:24.452497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:20:43.193 04:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:43.193 04:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:20:43.193 04:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:43.193 04:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:43.450 04:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:43.450 04:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:43.450 04:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:43.450 04:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:43.450 04:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:43.450 04:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:44.016 nvme0n1
00:20:44.016 04:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:20:44.016 04:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.016 04:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:44.016 04:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.016 04:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:44.016 04:10:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:44.016 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:44.016 Zero copy mechanism will not be used.
00:20:44.016 Running I/O for 2 seconds...
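Condensed from the xtrace above, the qd=16 ddgst run is set up in a handful of RPC calls before perform_tests starts the 2-second workload. The sketch below replays them with the exact flags from the trace; the one assumption is that the accel_error_inject_error calls (rpc_cmd in the script) go to the nvmf target's default RPC socket rather than bperf.sock, which the trace does not show:

# Start bdevperf: 128 KiB random reads, queue depth 16, 2 seconds; -z makes it wait for perform_tests on bperf.sock.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

# Keep per-status-code NVMe error counters and retry transient errors indefinitely.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any previous crc32c error injection (assumed to target the nvmf target's RPC socket).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# Attach the TCP controller with the data digest (ddgst) enabled so payload CRCs are verified.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-enable injection with the traced arguments (-t corrupt -i 32) so digests start failing.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

# Drive I/O; each corrupted digest shows up below as a data digest error followed by a
# COMMAND TRANSIENT TRANSPORT ERROR completion, which bdevperf retries and counts.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests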
00:20:44.016 [2024-12-09 04:10:25.835230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.016 [2024-12-09 04:10:25.835484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.016 [2024-12-09 04:10:25.835646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.016 [2024-12-09 04:10:25.840486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.016 [2024-12-09 04:10:25.840765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.016 [2024-12-09 04:10:25.840911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.016 [2024-12-09 04:10:25.845794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.016 [2024-12-09 04:10:25.845979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.016 [2024-12-09 04:10:25.846145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.016 [2024-12-09 04:10:25.851062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.016 [2024-12-09 04:10:25.851101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.016 [2024-12-09 04:10:25.851131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.016 [2024-12-09 04:10:25.855801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.016 [2024-12-09 04:10:25.856010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.016 [2024-12-09 04:10:25.856028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.016 [2024-12-09 04:10:25.860642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.016 [2024-12-09 04:10:25.860684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.016 [2024-12-09 04:10:25.860713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.016 [2024-12-09 04:10:25.865316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.865352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.865380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.869804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.869840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.869868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.874170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.874234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.874248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.878554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.878588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.878615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.883073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.883108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.883137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.887783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.887818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.887846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.892460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.892496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.892508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.897147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.897226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.897240] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.901778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.901973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.902006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.906788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.906823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.906853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.911414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.911448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.911476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.915977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.916022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.916051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.920469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.920503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.920531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.924824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.924859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.924887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.929193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.929226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.929254] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.933566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.933599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.933628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.937931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.937964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.937992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.942380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.942423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.942451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.947043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.947078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.947106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.951885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.952091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.952124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.957163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.957236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.017 [2024-12-09 04:10:25.957265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.017 [2024-12-09 04:10:25.962133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.017 [2024-12-09 04:10:25.962198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:44.017 [2024-12-09 04:10:25.962213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.277 [2024-12-09 04:10:25.967193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.277 [2024-12-09 04:10:25.967412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.277 [2024-12-09 04:10:25.967445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.277 [2024-12-09 04:10:25.972527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.277 [2024-12-09 04:10:25.972565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.277 [2024-12-09 04:10:25.972577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.277 [2024-12-09 04:10:25.977145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.277 [2024-12-09 04:10:25.977224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.277 [2024-12-09 04:10:25.977238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.277 [2024-12-09 04:10:25.981843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.277 [2024-12-09 04:10:25.981879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.277 [2024-12-09 04:10:25.981907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.277 [2024-12-09 04:10:25.986519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.277 [2024-12-09 04:10:25.986554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.277 [2024-12-09 04:10:25.986582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.277 [2024-12-09 04:10:25.991242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.277 [2024-12-09 04:10:25.991275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.277 [2024-12-09 04:10:25.991303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.277 [2024-12-09 04:10:25.995937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.277 [2024-12-09 04:10:25.995971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.277 [2024-12-09 04:10:25.996010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.277 [2024-12-09 04:10:26.000703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.277 [2024-12-09 04:10:26.000738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.277 [2024-12-09 04:10:26.000767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.277 [2024-12-09 04:10:26.005253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.277 [2024-12-09 04:10:26.005288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.277 [2024-12-09 04:10:26.005316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.277 [2024-12-09 04:10:26.009889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.277 [2024-12-09 04:10:26.009924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.277 [2024-12-09 04:10:26.009953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.277 [2024-12-09 04:10:26.014530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.277 [2024-12-09 04:10:26.014576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.277 [2024-12-09 04:10:26.014604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.277 [2024-12-09 04:10:26.019046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.277 [2024-12-09 04:10:26.019273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.277 [2024-12-09 04:10:26.019413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.277 [2024-12-09 04:10:26.024011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.277 [2024-12-09 04:10:26.024258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.024295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.028658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.028694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.028723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.032972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.033007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.033035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.037409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.037445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.037457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.041858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.041892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.041921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.046308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.046344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.046357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.050659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.050694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.050722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.055118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.055152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.055195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.059570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 
00:20:44.278 [2024-12-09 04:10:26.059604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.059632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.063953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.063988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.064016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.068560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.068595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.068623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.073090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.073124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.073152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.077605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.077814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.077832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.082707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.082790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.082819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.087432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.087466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.087494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.092092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.092127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.092155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.096586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.096803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.096820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.101536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.101571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.101599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.106041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.106076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.106128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.110670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.110703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.110731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.115228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.115262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.115290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.119657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.119691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.119720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.124086] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.124120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.124149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.128882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.128918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.128946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.133917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.133953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.133981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.139059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.139255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.139290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.144743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.278 [2024-12-09 04:10:26.144781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.278 [2024-12-09 04:10:26.144810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.278 [2024-12-09 04:10:26.149804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.279 [2024-12-09 04:10:26.149841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.279 [2024-12-09 04:10:26.149870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.279 [2024-12-09 04:10:26.154777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.279 [2024-12-09 04:10:26.154974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.279 [2024-12-09 04:10:26.155021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:20:44.279 [2024-12-09 04:10:26.160014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.279 [2024-12-09 04:10:26.160051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.279 [2024-12-09 04:10:26.160079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.279 [2024-12-09 04:10:26.164645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.279 [2024-12-09 04:10:26.164680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.279 [2024-12-09 04:10:26.164709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.279 [2024-12-09 04:10:26.169221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.279 [2024-12-09 04:10:26.169255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.279 [2024-12-09 04:10:26.169284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.279 [2024-12-09 04:10:26.173649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.279 [2024-12-09 04:10:26.173683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.279 [2024-12-09 04:10:26.173711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.279 [2024-12-09 04:10:26.178325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.279 [2024-12-09 04:10:26.178361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.279 [2024-12-09 04:10:26.178373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.279 [2024-12-09 04:10:26.182840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.279 [2024-12-09 04:10:26.182876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.279 [2024-12-09 04:10:26.182903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.279 [2024-12-09 04:10:26.187673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.279 [2024-12-09 04:10:26.187708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.279 [2024-12-09 04:10:26.187736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.279 [2024-12-09 04:10:26.192383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.279 [2024-12-09 04:10:26.192418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.279 [2024-12-09 04:10:26.192446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.279 [2024-12-09 04:10:26.196920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.279 [2024-12-09 04:10:26.197116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.279 [2024-12-09 04:10:26.197149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.279 [2024-12-09 04:10:26.201830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.279 [2024-12-09 04:10:26.201866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.279 [2024-12-09 04:10:26.201894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.279 [2024-12-09 04:10:26.206366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.279 [2024-12-09 04:10:26.206412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.279 [2024-12-09 04:10:26.206425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.279 [2024-12-09 04:10:26.211012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.279 [2024-12-09 04:10:26.211055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.279 [2024-12-09 04:10:26.211066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.279 [2024-12-09 04:10:26.215870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.279 [2024-12-09 04:10:26.215906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.279 [2024-12-09 04:10:26.215934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.279 [2024-12-09 04:10:26.220487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.279 [2024-12-09 04:10:26.220523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.279 [2024-12-09 04:10:26.220551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.539 [2024-12-09 04:10:26.225322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.539 [2024-12-09 04:10:26.225357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.539 [2024-12-09 04:10:26.225386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.229823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.229861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.229890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.234559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.234595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.234623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.239226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.239260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.239289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.243907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.243942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.243970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.248684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.248720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.248748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.253394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.253429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 
[2024-12-09 04:10:26.253457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.257921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.257956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.257984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.262291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.262327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.262341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.266779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.266814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.266842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.271204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.271251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.271280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.275638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.275677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.275705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.279941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.279977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.280004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.284768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.284803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.284831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.289411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.289621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.289643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.294292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.294329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.294342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.298855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.298890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.298918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.303512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.303549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.303576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.308099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.308135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.308163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.312618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.312819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.312854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.317467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.317504] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.317517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.321844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.321895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.321924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.326314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.326353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.326366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.330717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.330784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.330812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.540 [2024-12-09 04:10:26.335528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.540 [2024-12-09 04:10:26.335580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.540 [2024-12-09 04:10:26.335607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.340330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.340383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.340411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.344894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.344945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.344973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.349542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 
04:10:26.349593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.349621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.354179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.354226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.354239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.358710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.358761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.358789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.363043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.363095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.363122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.367533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.367587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.367614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.372054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.372105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.372133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.376850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.376902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.376914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.381324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.381376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.381404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.386038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.386088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.386162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.391340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.391394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.391422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.396321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.396375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.396403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.401173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.401252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.401282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.406050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.406126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.406146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.410920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.410979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.411007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.415885] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.415934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.415962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.420463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.420515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.420544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.424896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.424949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.424977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.429291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.429341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.429368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.433713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.433762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.433789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.438171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.438212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.438240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.442816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.442867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.442894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.447433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.447485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.447512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.452096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.452150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.452178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.456541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.541 [2024-12-09 04:10:26.456592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.541 [2024-12-09 04:10:26.456620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.541 [2024-12-09 04:10:26.460959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.542 [2024-12-09 04:10:26.461013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.542 [2024-12-09 04:10:26.461040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.542 [2024-12-09 04:10:26.465463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.542 [2024-12-09 04:10:26.465518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.542 [2024-12-09 04:10:26.465546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.542 [2024-12-09 04:10:26.469999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.542 [2024-12-09 04:10:26.470053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.542 [2024-12-09 04:10:26.470081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.542 [2024-12-09 04:10:26.474720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.542 [2024-12-09 04:10:26.474771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.542 [2024-12-09 04:10:26.474798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.542 [2024-12-09 04:10:26.479218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.542 [2024-12-09 04:10:26.479269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.542 [2024-12-09 04:10:26.479297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.542 [2024-12-09 04:10:26.483598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.542 [2024-12-09 04:10:26.483658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.542 [2024-12-09 04:10:26.483686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.803 [2024-12-09 04:10:26.488215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.803 [2024-12-09 04:10:26.488249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.803 [2024-12-09 04:10:26.488277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.803 [2024-12-09 04:10:26.492759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.803 [2024-12-09 04:10:26.492809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.803 [2024-12-09 04:10:26.492836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.803 [2024-12-09 04:10:26.497564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.803 [2024-12-09 04:10:26.497614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.803 [2024-12-09 04:10:26.497641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.803 [2024-12-09 04:10:26.502265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.803 [2024-12-09 04:10:26.502301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.803 [2024-12-09 04:10:26.502329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.803 [2024-12-09 04:10:26.506886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.803 [2024-12-09 04:10:26.506937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.803 [2024-12-09 04:10:26.506972] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.803 [2024-12-09 04:10:26.511611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.803 [2024-12-09 04:10:26.511679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.803 [2024-12-09 04:10:26.511707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.516716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.516766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.516794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.521340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.521391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.521420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.525790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.525843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.525869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.530171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.530219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.530249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.534607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.534658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.534686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.538914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.538966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 
[2024-12-09 04:10:26.538993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.543343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.543393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.543421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.547863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.547915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.547943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.552533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.552584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.552611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.557046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.557098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.557125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.561553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.561604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.561632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.565900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.565950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.565977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.570325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.570364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6720 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.570393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.574620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.574671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.574698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.579017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.579068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.579095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.583412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.583464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.583492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.587808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.587858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.587885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.592208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.592242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.592269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.596653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.596703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.596731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.601470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.601521] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.804 [2024-12-09 04:10:26.601548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.804 [2024-12-09 04:10:26.606253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.804 [2024-12-09 04:10:26.606289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.606317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.610849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.610899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.610927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.615570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.615623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.615650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.620406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.620469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.620497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.625205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.625257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.625285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.629658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.629710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.629738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.634312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.634349] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.634377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.638769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.638805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.638833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.643131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.643209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.643224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.647379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.647432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.647460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.651954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.652012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.652039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.656576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.656628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.656666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.661132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.661208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.661222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.665634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.665689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.665717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.670178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.670224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.670236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.674741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.674792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.674820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.679340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.679393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.679420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.683752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.683804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.683831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.688263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.688325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.688356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.692586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.692650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.692677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.696911] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.805 [2024-12-09 04:10:26.696962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.805 [2024-12-09 04:10:26.696989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.805 [2024-12-09 04:10:26.701583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.806 [2024-12-09 04:10:26.701633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.806 [2024-12-09 04:10:26.701660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.806 [2024-12-09 04:10:26.706588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.806 [2024-12-09 04:10:26.706638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.806 [2024-12-09 04:10:26.706666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.806 [2024-12-09 04:10:26.711574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.806 [2024-12-09 04:10:26.711625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.806 [2024-12-09 04:10:26.711652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.806 [2024-12-09 04:10:26.716883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.806 [2024-12-09 04:10:26.716934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.806 [2024-12-09 04:10:26.716961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.806 [2024-12-09 04:10:26.721663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.806 [2024-12-09 04:10:26.721714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.806 [2024-12-09 04:10:26.721742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.806 [2024-12-09 04:10:26.726463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.806 [2024-12-09 04:10:26.726500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.806 [2024-12-09 04:10:26.726528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:20:44.806 [2024-12-09 04:10:26.731062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.806 [2024-12-09 04:10:26.731115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.806 [2024-12-09 04:10:26.731143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.806 [2024-12-09 04:10:26.735645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.806 [2024-12-09 04:10:26.735697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.806 [2024-12-09 04:10:26.735725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.806 [2024-12-09 04:10:26.740218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.806 [2024-12-09 04:10:26.740271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.806 [2024-12-09 04:10:26.740299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.806 [2024-12-09 04:10:26.744787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.806 [2024-12-09 04:10:26.744841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.806 [2024-12-09 04:10:26.744869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.806 [2024-12-09 04:10:26.749403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:44.806 [2024-12-09 04:10:26.749456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.806 [2024-12-09 04:10:26.749483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.067 [2024-12-09 04:10:26.753868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.067 [2024-12-09 04:10:26.753919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.067 [2024-12-09 04:10:26.753946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.067 [2024-12-09 04:10:26.758621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.067 [2024-12-09 04:10:26.758658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.067 [2024-12-09 04:10:26.758686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.067 [2024-12-09 04:10:26.763483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.067 [2024-12-09 04:10:26.763535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.067 [2024-12-09 04:10:26.763562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.067 [2024-12-09 04:10:26.768138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.067 [2024-12-09 04:10:26.768201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.067 [2024-12-09 04:10:26.768231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.067 [2024-12-09 04:10:26.772758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.067 [2024-12-09 04:10:26.772817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.067 [2024-12-09 04:10:26.772845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.067 [2024-12-09 04:10:26.777412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.067 [2024-12-09 04:10:26.777466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.067 [2024-12-09 04:10:26.777493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.067 [2024-12-09 04:10:26.782237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.067 [2024-12-09 04:10:26.782269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.067 [2024-12-09 04:10:26.782296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.067 [2024-12-09 04:10:26.787031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.067 [2024-12-09 04:10:26.787068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.067 [2024-12-09 04:10:26.787095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.067 [2024-12-09 04:10:26.791543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.067 [2024-12-09 04:10:26.791594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.067 [2024-12-09 04:10:26.791621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.067 [2024-12-09 04:10:26.795908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.067 [2024-12-09 04:10:26.795962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.067 [2024-12-09 04:10:26.795990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.067 [2024-12-09 04:10:26.800331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.067 [2024-12-09 04:10:26.800382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.067 [2024-12-09 04:10:26.800410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.067 [2024-12-09 04:10:26.804887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.067 [2024-12-09 04:10:26.804938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.067 [2024-12-09 04:10:26.804965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.067 [2024-12-09 04:10:26.809726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.067 [2024-12-09 04:10:26.809778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.067 [2024-12-09 04:10:26.809806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.067 [2024-12-09 04:10:26.814625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.067 [2024-12-09 04:10:26.814681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.067 [2024-12-09 04:10:26.814709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.067 [2024-12-09 04:10:26.819309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.067 [2024-12-09 04:10:26.819360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.067 [2024-12-09 04:10:26.819388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.067 [2024-12-09 04:10:26.824075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.067 [2024-12-09 04:10:26.824127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 
[2024-12-09 04:10:26.824155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.828711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.828764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.828791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.068 6665.00 IOPS, 833.12 MiB/s [2024-12-09T04:10:27.018Z] [2024-12-09 04:10:26.834763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.834815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.834843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.839367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.839404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.839431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.844001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.844051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.844078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.848616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.848665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.848692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.853089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.853140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.853167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.857625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.857676] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.857704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.862249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.862286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.862314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.866949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.867020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.867049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.871779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.871830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.871858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.876624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.876675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.876703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.881295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.881345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.881372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.885896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.885947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.885958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.890759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 
04:10:26.890808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.890836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.895328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.895379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.895406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.899875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.899926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.899965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.904464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.904517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.904544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.908878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.908929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.908956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.913314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.913364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.913391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.068 [2024-12-09 04:10:26.917701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.068 [2024-12-09 04:10:26.917752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.068 [2024-12-09 04:10:26.917780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:26.922541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:26.922592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:26.922619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:26.927503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:26.927554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:26.927581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:26.932248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:26.932299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:26.932327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:26.936801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:26.936852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:26.936879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:26.941586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:26.941637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:26.941665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:26.946304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:26.946341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:26.946368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:26.950594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:26.950645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:26.950673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:26.955198] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:26.955248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:26.955276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:26.959519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:26.959570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:26.959598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:26.963844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:26.963895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:26.963923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:26.968426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:26.968477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:26.968505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:26.972958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:26.973010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:26.973046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:26.977625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:26.977679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:26.977706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:26.982451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:26.982487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:26.982515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:20:45.069 [2024-12-09 04:10:26.987096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:26.987148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:26.987176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:26.991716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:26.991767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:26.991795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:26.996231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:26.996283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:26.996311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:27.000611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:27.000663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:27.000690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:27.004997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:27.005047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:27.005075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:27.009320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.069 [2024-12-09 04:10:27.009369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.069 [2024-12-09 04:10:27.009397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.069 [2024-12-09 04:10:27.013956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.070 [2024-12-09 04:10:27.014007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.070 [2024-12-09 04:10:27.014034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.018838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.018887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.018914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.023615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.023676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.023703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.028172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.028234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.028263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.032572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.032623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.032650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.037107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.037157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.037197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.041622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.041673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.041700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.046238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.046276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.046304] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.050544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.050601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.050629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.055001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.055052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.055080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.059345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.059397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.059424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.063876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.063928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.063956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.068617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.068679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.068707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.073244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.073296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.073324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.077922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.077973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.078008] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.082508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.082571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.082599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.086995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.087046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.087073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.091502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.091554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.091582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.095937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.095990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.096024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.100367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.100421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.100449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.104705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.104757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.330 [2024-12-09 04:10:27.104784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.330 [2024-12-09 04:10:27.109019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.330 [2024-12-09 04:10:27.109070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:45.330 [2024-12-09 04:10:27.109097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.113364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.113400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.113428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.117692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.117742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.117769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.122314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.122350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.122362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.126892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.126942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.126970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.131564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.131600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.131627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.136039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.136092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.136120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.140597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.140650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5824 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.140677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.144997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.145051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.145078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.149711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.149763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.149791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.154569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.154619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.154647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.159157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.159219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.159247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.163812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.163864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.163891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.168367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.168417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.168445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.173036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.173087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.173115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.177865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.177917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.177945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.182601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.182652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.182680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.187335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.187386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.187413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.191981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.192048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.192076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.196740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.196791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.196819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.201299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.201350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.201378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.205689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 
[2024-12-09 04:10:27.205739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.205767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.210255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.210291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.210320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.214805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.214856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.214883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.219156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.219215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.219244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.223558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.223609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.223636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.228163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.228222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.228251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.331 [2024-12-09 04:10:27.232810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.331 [2024-12-09 04:10:27.232863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.331 [2024-12-09 04:10:27.232891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.332 [2024-12-09 04:10:27.237414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x15f9620) 00:20:45.332 [2024-12-09 04:10:27.237465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.332 [2024-12-09 04:10:27.237492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.332 [2024-12-09 04:10:27.241940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.332 [2024-12-09 04:10:27.241991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.332 [2024-12-09 04:10:27.242020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.332 [2024-12-09 04:10:27.246554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.332 [2024-12-09 04:10:27.246606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.332 [2024-12-09 04:10:27.246633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.332 [2024-12-09 04:10:27.251053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.332 [2024-12-09 04:10:27.251104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.332 [2024-12-09 04:10:27.251131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.332 [2024-12-09 04:10:27.255487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.332 [2024-12-09 04:10:27.255541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.332 [2024-12-09 04:10:27.255568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.332 [2024-12-09 04:10:27.259817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.332 [2024-12-09 04:10:27.259868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.332 [2024-12-09 04:10:27.259895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.332 [2024-12-09 04:10:27.264132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.332 [2024-12-09 04:10:27.264209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.332 [2024-12-09 04:10:27.264222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.332 [2024-12-09 04:10:27.268462] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.332 [2024-12-09 04:10:27.268513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.332 [2024-12-09 04:10:27.268541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.332 [2024-12-09 04:10:27.272880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.332 [2024-12-09 04:10:27.272930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.332 [2024-12-09 04:10:27.272958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.592 [2024-12-09 04:10:27.277814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.277864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.592 [2024-12-09 04:10:27.277892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.592 [2024-12-09 04:10:27.282642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.282695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.592 [2024-12-09 04:10:27.282722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.592 [2024-12-09 04:10:27.287433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.287484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.592 [2024-12-09 04:10:27.287511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.592 [2024-12-09 04:10:27.292106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.292158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.592 [2024-12-09 04:10:27.292196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.592 [2024-12-09 04:10:27.296774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.296826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.592 [2024-12-09 04:10:27.296854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:20:45.592 [2024-12-09 04:10:27.301561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.301612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.592 [2024-12-09 04:10:27.301639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.592 [2024-12-09 04:10:27.306300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.306336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.592 [2024-12-09 04:10:27.306363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.592 [2024-12-09 04:10:27.310961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.310998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.592 [2024-12-09 04:10:27.311026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.592 [2024-12-09 04:10:27.315519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.315574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.592 [2024-12-09 04:10:27.315602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.592 [2024-12-09 04:10:27.319870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.319921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.592 [2024-12-09 04:10:27.319949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.592 [2024-12-09 04:10:27.324214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.324265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.592 [2024-12-09 04:10:27.324293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.592 [2024-12-09 04:10:27.328858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.328909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.592 [2024-12-09 04:10:27.328937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.592 [2024-12-09 04:10:27.333598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.333656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.592 [2024-12-09 04:10:27.333707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.592 [2024-12-09 04:10:27.338771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.338823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.592 [2024-12-09 04:10:27.338851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.592 [2024-12-09 04:10:27.343526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.343594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.592 [2024-12-09 04:10:27.343606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.592 [2024-12-09 04:10:27.348351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.348389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.592 [2024-12-09 04:10:27.348418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.592 [2024-12-09 04:10:27.353311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.353361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.592 [2024-12-09 04:10:27.353390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.592 [2024-12-09 04:10:27.358063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.358156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.592 [2024-12-09 04:10:27.358197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.592 [2024-12-09 04:10:27.362981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.592 [2024-12-09 04:10:27.363033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.363061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.367863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.367916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.367943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.372610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.372679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.372708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.377040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.377091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.377121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.381510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.381577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.381605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.386126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.386198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.386213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.390644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.390692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.390720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.395211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.395277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:45.593 [2024-12-09 04:10:27.395307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.399919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.399971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.400000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.404686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.404736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.404765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.409718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.409770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.409800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.414812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.414865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.414893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.419444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.419501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.419515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.424211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.424277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.424306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.428714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.428767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7360 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.428795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.433135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.433232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.433247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.437797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.437849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.437877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.442414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.442466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.442494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.446713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.446767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.446794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.451075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.451127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.451156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.455419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.455471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.455499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.459888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.459940] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.593 [2024-12-09 04:10:27.459968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.593 [2024-12-09 04:10:27.464240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.593 [2024-12-09 04:10:27.464291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.594 [2024-12-09 04:10:27.464319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.594 [2024-12-09 04:10:27.468543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.594 [2024-12-09 04:10:27.468596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.594 [2024-12-09 04:10:27.468624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.594 [2024-12-09 04:10:27.472863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.594 [2024-12-09 04:10:27.472915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.594 [2024-12-09 04:10:27.472943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.594 [2024-12-09 04:10:27.477267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.594 [2024-12-09 04:10:27.477304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.594 [2024-12-09 04:10:27.477333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.594 [2024-12-09 04:10:27.481548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.594 [2024-12-09 04:10:27.481585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.594 [2024-12-09 04:10:27.481628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.594 [2024-12-09 04:10:27.485937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.594 [2024-12-09 04:10:27.485971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.594 [2024-12-09 04:10:27.485999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.594 [2024-12-09 04:10:27.490479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.594 [2024-12-09 04:10:27.490516] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.594 [2024-12-09 04:10:27.490546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.594 [2024-12-09 04:10:27.494837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.594 [2024-12-09 04:10:27.495042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.594 [2024-12-09 04:10:27.495076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.594 [2024-12-09 04:10:27.499509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.594 [2024-12-09 04:10:27.499547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.594 [2024-12-09 04:10:27.499576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.594 [2024-12-09 04:10:27.503822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.594 [2024-12-09 04:10:27.503854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.594 [2024-12-09 04:10:27.503866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.594 [2024-12-09 04:10:27.508216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.594 [2024-12-09 04:10:27.508253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.594 [2024-12-09 04:10:27.508281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.594 [2024-12-09 04:10:27.512515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.594 [2024-12-09 04:10:27.512551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.594 [2024-12-09 04:10:27.512580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.594 [2024-12-09 04:10:27.516746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.594 [2024-12-09 04:10:27.516783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.594 [2024-12-09 04:10:27.516812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.594 [2024-12-09 04:10:27.521155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15f9620) 00:20:45.594 [2024-12-09 04:10:27.521219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.594 [2024-12-09 04:10:27.521249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.594 [2024-12-09 04:10:27.525465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.594 [2024-12-09 04:10:27.525678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.594 [2024-12-09 04:10:27.525712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.594 [2024-12-09 04:10:27.530240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.594 [2024-12-09 04:10:27.530425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.594 [2024-12-09 04:10:27.530579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.594 [2024-12-09 04:10:27.535257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.594 [2024-12-09 04:10:27.535487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.594 [2024-12-09 04:10:27.535641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.855 [2024-12-09 04:10:27.540373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.855 [2024-12-09 04:10:27.540621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.855 [2024-12-09 04:10:27.540822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.855 [2024-12-09 04:10:27.545318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.855 [2024-12-09 04:10:27.545504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.855 [2024-12-09 04:10:27.545653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.855 [2024-12-09 04:10:27.549995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.855 [2024-12-09 04:10:27.550271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.855 [2024-12-09 04:10:27.550433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.855 [2024-12-09 04:10:27.554756] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.855 [2024-12-09 04:10:27.554979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.855 [2024-12-09 04:10:27.555115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.855 [2024-12-09 04:10:27.559618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.855 [2024-12-09 04:10:27.559828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.855 [2024-12-09 04:10:27.560028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.855 [2024-12-09 04:10:27.564359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.855 [2024-12-09 04:10:27.564575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.855 [2024-12-09 04:10:27.564701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.855 [2024-12-09 04:10:27.569148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.855 [2024-12-09 04:10:27.569366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.855 [2024-12-09 04:10:27.569526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.855 [2024-12-09 04:10:27.573997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.855 [2024-12-09 04:10:27.574233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.855 [2024-12-09 04:10:27.574436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.855 [2024-12-09 04:10:27.578829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.855 [2024-12-09 04:10:27.579053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.855 [2024-12-09 04:10:27.579268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.855 [2024-12-09 04:10:27.583681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.855 [2024-12-09 04:10:27.583897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.855 [2024-12-09 04:10:27.584091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:20:45.855 [2024-12-09 04:10:27.588653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.855 [2024-12-09 04:10:27.588860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.855 [2024-12-09 04:10:27.589057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.855 [2024-12-09 04:10:27.593701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.855 [2024-12-09 04:10:27.593927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.855 [2024-12-09 04:10:27.594039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.855 [2024-12-09 04:10:27.598401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.855 [2024-12-09 04:10:27.598472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.855 [2024-12-09 04:10:27.598501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.855 [2024-12-09 04:10:27.602715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.855 [2024-12-09 04:10:27.602751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.855 [2024-12-09 04:10:27.602779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.855 [2024-12-09 04:10:27.607131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.855 [2024-12-09 04:10:27.607220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.855 [2024-12-09 04:10:27.607234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.855 [2024-12-09 04:10:27.611666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.855 [2024-12-09 04:10:27.611702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.855 [2024-12-09 04:10:27.611730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.855 [2024-12-09 04:10:27.616135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.855 [2024-12-09 04:10:27.616217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.616231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.620378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.620414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.620443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.624595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.624633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.624662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.628845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.628883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.628911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.633065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.633103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.633132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.637340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.637376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.637405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.641566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.641617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.641648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.645899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.645933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.645961] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.650284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.650331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.650344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.654540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.654577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.654605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.658778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.658824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.658852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.663072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.663108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.663135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.667383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.667419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.667447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.671631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.671668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.671697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.675926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.675962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:45.856 [2024-12-09 04:10:27.675990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.680146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.680211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.680240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.684384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.684420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.684448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.688652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.688690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.688718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.692994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.693030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.693059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.697360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.697397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.697424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.701553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.701603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.701632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.705843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.705879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.705907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.710378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.710413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.710449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.714550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.714584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.714611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.718772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.718807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.718835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.723326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.723361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.723388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.727899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.727935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.856 [2024-12-09 04:10:27.727963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.856 [2024-12-09 04:10:27.732710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.856 [2024-12-09 04:10:27.732746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.857 [2024-12-09 04:10:27.732775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.857 [2024-12-09 04:10:27.737625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.857 [2024-12-09 04:10:27.737689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.857 [2024-12-09 04:10:27.737701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.857 [2024-12-09 04:10:27.742442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.857 [2024-12-09 04:10:27.742636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.857 [2024-12-09 04:10:27.742668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.857 [2024-12-09 04:10:27.747353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.857 [2024-12-09 04:10:27.747389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.857 [2024-12-09 04:10:27.747417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.857 [2024-12-09 04:10:27.752039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.857 [2024-12-09 04:10:27.752079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.857 [2024-12-09 04:10:27.752107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.857 [2024-12-09 04:10:27.756689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.857 [2024-12-09 04:10:27.756725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.857 [2024-12-09 04:10:27.756753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.857 [2024-12-09 04:10:27.761259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.857 [2024-12-09 04:10:27.761296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.857 [2024-12-09 04:10:27.761324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.857 [2024-12-09 04:10:27.765857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.857 [2024-12-09 04:10:27.765896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.857 [2024-12-09 04:10:27.765925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.857 [2024-12-09 04:10:27.770452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 
00:20:45.857 [2024-12-09 04:10:27.770486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.857 [2024-12-09 04:10:27.770514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.857 [2024-12-09 04:10:27.774694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.857 [2024-12-09 04:10:27.774727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.857 [2024-12-09 04:10:27.774755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.857 [2024-12-09 04:10:27.778985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.857 [2024-12-09 04:10:27.779029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.857 [2024-12-09 04:10:27.779057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.857 [2024-12-09 04:10:27.783341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.857 [2024-12-09 04:10:27.783377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.857 [2024-12-09 04:10:27.783406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:45.857 [2024-12-09 04:10:27.787658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.857 [2024-12-09 04:10:27.787694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.857 [2024-12-09 04:10:27.787723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:45.857 [2024-12-09 04:10:27.792051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.857 [2024-12-09 04:10:27.792086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.857 [2024-12-09 04:10:27.792114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:45.857 [2024-12-09 04:10:27.796367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.857 [2024-12-09 04:10:27.796403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.857 [2024-12-09 04:10:27.796431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:45.857 [2024-12-09 04:10:27.800727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x15f9620) 00:20:45.857 [2024-12-09 04:10:27.800761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.857 [2024-12-09 04:10:27.800789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:46.115 [2024-12-09 04:10:27.805191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:46.115 [2024-12-09 04:10:27.805227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.115 [2024-12-09 04:10:27.805255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:46.115 [2024-12-09 04:10:27.809622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:46.115 [2024-12-09 04:10:27.809664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.115 [2024-12-09 04:10:27.809692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:46.115 [2024-12-09 04:10:27.813962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:46.115 [2024-12-09 04:10:27.813996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.115 [2024-12-09 04:10:27.814024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:46.115 [2024-12-09 04:10:27.818305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:46.115 [2024-12-09 04:10:27.818341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.115 [2024-12-09 04:10:27.818369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:46.115 [2024-12-09 04:10:27.822679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:46.115 [2024-12-09 04:10:27.822715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.115 [2024-12-09 04:10:27.822743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:46.115 [2024-12-09 04:10:27.827155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:46.115 [2024-12-09 04:10:27.827234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.115 [2024-12-09 04:10:27.827248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:46.115 [2024-12-09 04:10:27.831507] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15f9620) 00:20:46.115 [2024-12-09 04:10:27.831541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.115 [2024-12-09 04:10:27.831569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:46.115 6734.50 IOPS, 841.81 MiB/s 00:20:46.115 Latency(us) 00:20:46.115 [2024-12-09T04:10:28.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.115 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:46.115 nvme0n1 : 2.00 6732.49 841.56 0.00 0.00 2373.45 1966.08 6166.34 00:20:46.115 [2024-12-09T04:10:28.065Z] =================================================================================================================== 00:20:46.115 [2024-12-09T04:10:28.065Z] Total : 6732.49 841.56 0.00 0.00 2373.45 1966.08 6166.34 00:20:46.115 { 00:20:46.115 "results": [ 00:20:46.115 { 00:20:46.115 "job": "nvme0n1", 00:20:46.115 "core_mask": "0x2", 00:20:46.115 "workload": "randread", 00:20:46.115 "status": "finished", 00:20:46.115 "queue_depth": 16, 00:20:46.115 "io_size": 131072, 00:20:46.115 "runtime": 2.002973, 00:20:46.115 "iops": 6732.492150418403, 00:20:46.115 "mibps": 841.5615188023004, 00:20:46.115 "io_failed": 0, 00:20:46.115 "io_timeout": 0, 00:20:46.115 "avg_latency_us": 2373.4488556308356, 00:20:46.115 "min_latency_us": 1966.08, 00:20:46.115 "max_latency_us": 6166.341818181818 00:20:46.115 } 00:20:46.115 ], 00:20:46.115 "core_count": 1 00:20:46.115 } 00:20:46.115 04:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:46.115 04:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:46.115 04:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:46.115 04:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:46.115 | .driver_specific 00:20:46.115 | .nvme_error 00:20:46.115 | .status_code 00:20:46.115 | .command_transient_transport_error' 00:20:46.373 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 435 > 0 )) 00:20:46.373 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80888 00:20:46.373 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80888 ']' 00:20:46.373 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80888 00:20:46.373 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:46.373 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.373 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80888 00:20:46.373 killing process with pid 80888 00:20:46.373 Received shutdown signal, test time was about 2.000000 seconds 00:20:46.373 00:20:46.373 Latency(us) 00:20:46.373 [2024-12-09T04:10:28.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:46.373 [2024-12-09T04:10:28.323Z] =================================================================================================================== 00:20:46.373 [2024-12-09T04:10:28.323Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:46.373 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:46.373 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:46.373 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80888' 00:20:46.373 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80888 00:20:46.373 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80888 00:20:46.630 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:46.630 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:46.630 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:46.630 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:46.630 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:46.630 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80943 00:20:46.630 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80943 /var/tmp/bperf.sock 00:20:46.630 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:46.630 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80943 ']' 00:20:46.630 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:46.630 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.630 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:46.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:46.630 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.630 04:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:46.630 [2024-12-09 04:10:28.439264] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:20:46.630 [2024-12-09 04:10:28.439571] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80943 ] 00:20:46.888 [2024-12-09 04:10:28.587104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.888 [2024-12-09 04:10:28.651573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.888 [2024-12-09 04:10:28.726963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:47.822 04:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.822 04:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:47.822 04:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:47.822 04:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:47.822 04:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:47.822 04:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.822 04:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:47.822 04:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.822 04:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:47.822 04:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:48.080 nvme0n1 00:20:48.080 04:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:48.080 04:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.080 04:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:48.080 04:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.080 04:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:48.080 04:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:48.338 Running I/O for 2 seconds... 
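The xtrace output above captures the whole randwrite error case of host/digest.sh: bdevperf is started in RPC-driven mode (-z) on /var/tmp/bperf.sock, per-controller error statistics and unlimited retries are enabled, the controller is attached with data digest turned on (--ddgst), and crc32c corruption is injected through the accel error-injection RPC before the 2-second run; afterwards the same get_transient_errcount query shown earlier (bdev_get_iostat piped through jq) reads back the TRANSIENT TRANSPORT ERROR count. A minimal replay of that sequence, assembled only from the commands visible in the trace (paths, sockets, and arguments copied verbatim; rpc_cmd and waitforlisten are test-suite helpers defined outside this excerpt, and rpc_cmd addresses a different RPC socket than bperf.sock), might look like:

# Sketch reconstructed from the xtrace lines above; assumes the same repo
# layout and sockets used by this job.
bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

# Start bdevperf in RPC-driven mode with the randwrite/4096/qd128 workload.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
# (the script then waits for /var/tmp/bperf.sock via its waitforlisten helper)

# Enable per-controller error statistics, retry indefinitely, and attach the
# controller with data digest enabled.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Inject crc32c errors through the accel framework (arguments as in the trace;
# issued via the test suite's rpc_cmd helper, not against bperf.sock).
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

# Drive the 2-second run, then read back the transient transport error counter.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
bperf_rpc bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'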
00:20:48.338 [2024-12-09 04:10:30.132487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef7100 00:20:48.338 [2024-12-09 04:10:30.133789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.338 [2024-12-09 04:10:30.133832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.338 [2024-12-09 04:10:30.146370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef7970 00:20:48.338 [2024-12-09 04:10:30.148155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.338 [2024-12-09 04:10:30.148232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.338 [2024-12-09 04:10:30.161434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef81e0 00:20:48.338 [2024-12-09 04:10:30.162984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.338 [2024-12-09 04:10:30.163229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:48.338 [2024-12-09 04:10:30.176579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef8a50 00:20:48.338 [2024-12-09 04:10:30.177974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.338 [2024-12-09 04:10:30.178008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:48.338 [2024-12-09 04:10:30.190827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef92c0 00:20:48.338 [2024-12-09 04:10:30.192333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.338 [2024-12-09 04:10:30.192364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:48.338 [2024-12-09 04:10:30.205309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef9b30 00:20:48.338 [2024-12-09 04:10:30.206839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.338 [2024-12-09 04:10:30.206874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:48.338 [2024-12-09 04:10:30.219769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efa3a0 00:20:48.338 [2024-12-09 04:10:30.221239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.338 [2024-12-09 04:10:30.221445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 
sqhd:0076 p:0 m:0 dnr:0 00:20:48.338 [2024-12-09 04:10:30.234184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efac10 00:20:48.338 [2024-12-09 04:10:30.235702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.338 [2024-12-09 04:10:30.235902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:48.338 [2024-12-09 04:10:30.249066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efb480 00:20:48.338 [2024-12-09 04:10:30.250746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.338 [2024-12-09 04:10:30.250974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:48.338 [2024-12-09 04:10:30.263993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efbcf0 00:20:48.338 [2024-12-09 04:10:30.265404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.338 [2024-12-09 04:10:30.265618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:48.338 [2024-12-09 04:10:30.278231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efc560 00:20:48.338 [2024-12-09 04:10:30.279624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.338 [2024-12-09 04:10:30.279822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:48.597 [2024-12-09 04:10:30.293350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efcdd0 00:20:48.597 [2024-12-09 04:10:30.294792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.597 [2024-12-09 04:10:30.294997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:48.597 [2024-12-09 04:10:30.309095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efd640 00:20:48.597 [2024-12-09 04:10:30.310687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.597 [2024-12-09 04:10:30.310900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:48.597 [2024-12-09 04:10:30.324012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efdeb0 00:20:48.597 [2024-12-09 04:10:30.325387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.597 [2024-12-09 04:10:30.325614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:48.597 [2024-12-09 04:10:30.338161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efe720 00:20:48.597 [2024-12-09 04:10:30.339329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.597 [2024-12-09 04:10:30.339371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:48.597 [2024-12-09 04:10:30.351539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eff3c8 00:20:48.597 [2024-12-09 04:10:30.352549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.598 [2024-12-09 04:10:30.352643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:48.598 [2024-12-09 04:10:30.371059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eff3c8 00:20:48.598 [2024-12-09 04:10:30.373088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.598 [2024-12-09 04:10:30.373122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:48.598 [2024-12-09 04:10:30.385354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efe720 00:20:48.598 [2024-12-09 04:10:30.387887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.598 [2024-12-09 04:10:30.387931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:48.598 [2024-12-09 04:10:30.402411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efdeb0 00:20:48.598 [2024-12-09 04:10:30.404917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.598 [2024-12-09 04:10:30.404950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:48.598 [2024-12-09 04:10:30.417385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efd640 00:20:48.598 [2024-12-09 04:10:30.419756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.598 [2024-12-09 04:10:30.419790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:48.598 [2024-12-09 04:10:30.432104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efcdd0 00:20:48.598 [2024-12-09 04:10:30.434254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.598 [2024-12-09 04:10:30.434454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:48.598 [2024-12-09 04:10:30.446619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efc560 00:20:48.598 [2024-12-09 04:10:30.449206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.598 [2024-12-09 04:10:30.449236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:48.598 [2024-12-09 04:10:30.461848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efbcf0 00:20:48.598 [2024-12-09 04:10:30.464263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.598 [2024-12-09 04:10:30.464298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:48.598 [2024-12-09 04:10:30.477523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efb480 00:20:48.598 [2024-12-09 04:10:30.479826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.598 [2024-12-09 04:10:30.480026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:48.598 [2024-12-09 04:10:30.492965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efac10 00:20:48.598 [2024-12-09 04:10:30.495234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.598 [2024-12-09 04:10:30.495447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:48.598 [2024-12-09 04:10:30.508823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efa3a0 00:20:48.598 [2024-12-09 04:10:30.511212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.598 [2024-12-09 04:10:30.511431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:48.598 [2024-12-09 04:10:30.524114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef9b30 00:20:48.598 [2024-12-09 04:10:30.526177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.598 [2024-12-09 04:10:30.526220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:48.598 [2024-12-09 04:10:30.538168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef92c0 00:20:48.598 [2024-12-09 04:10:30.540251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.598 [2024-12-09 04:10:30.540285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:48.856 [2024-12-09 04:10:30.553537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef8a50 00:20:48.856 [2024-12-09 04:10:30.555880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.856 [2024-12-09 04:10:30.555915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:48.856 [2024-12-09 04:10:30.568885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef81e0 00:20:48.856 [2024-12-09 04:10:30.571115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.856 [2024-12-09 04:10:30.571150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:48.856 [2024-12-09 04:10:30.584019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef7970 00:20:48.856 [2024-12-09 04:10:30.586451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.856 [2024-12-09 04:10:30.586658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:48.856 [2024-12-09 04:10:30.600302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef7100 00:20:48.856 [2024-12-09 04:10:30.602548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.856 [2024-12-09 04:10:30.602583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:48.856 [2024-12-09 04:10:30.615654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef6890 00:20:48.856 [2024-12-09 04:10:30.617657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.856 [2024-12-09 04:10:30.617690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:48.856 [2024-12-09 04:10:30.629547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef6020 00:20:48.856 [2024-12-09 04:10:30.631786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.856 [2024-12-09 04:10:30.631818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:48.856 [2024-12-09 04:10:30.644178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef57b0 00:20:48.856 [2024-12-09 04:10:30.646057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.856 [2024-12-09 04:10:30.646089] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:48.857 [2024-12-09 04:10:30.658552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef4f40 00:20:48.857 [2024-12-09 04:10:30.660616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.857 [2024-12-09 04:10:30.660648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:48.857 [2024-12-09 04:10:30.673309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef46d0 00:20:48.857 [2024-12-09 04:10:30.675386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.857 [2024-12-09 04:10:30.675421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:48.857 [2024-12-09 04:10:30.687808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef3e60 00:20:48.857 [2024-12-09 04:10:30.689603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.857 [2024-12-09 04:10:30.689635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:48.857 [2024-12-09 04:10:30.701560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef35f0 00:20:48.857 [2024-12-09 04:10:30.703495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.857 [2024-12-09 04:10:30.703527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:48.857 [2024-12-09 04:10:30.715690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef2d80 00:20:48.857 [2024-12-09 04:10:30.717568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.857 [2024-12-09 04:10:30.717630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:48.857 [2024-12-09 04:10:30.729900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef2510 00:20:48.857 [2024-12-09 04:10:30.731770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.857 [2024-12-09 04:10:30.731801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:48.857 [2024-12-09 04:10:30.744017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef1ca0 00:20:48.857 [2024-12-09 04:10:30.745719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.857 [2024-12-09 04:10:30.745750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:48.857 [2024-12-09 04:10:30.757685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef1430 00:20:48.857 [2024-12-09 04:10:30.759501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.857 [2024-12-09 04:10:30.759534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:48.857 [2024-12-09 04:10:30.772453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef0bc0 00:20:48.857 [2024-12-09 04:10:30.774331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.857 [2024-12-09 04:10:30.774528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:48.857 [2024-12-09 04:10:30.787525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef0350 00:20:48.857 [2024-12-09 04:10:30.789327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:48.857 [2024-12-09 04:10:30.789362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:48.857 [2024-12-09 04:10:30.803508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eefae0 00:20:49.114 [2024-12-09 04:10:30.805672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.114 [2024-12-09 04:10:30.805706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:49.114 [2024-12-09 04:10:30.819659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eef270 00:20:49.114 [2024-12-09 04:10:30.821642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.114 [2024-12-09 04:10:30.821685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:49.114 [2024-12-09 04:10:30.835015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eeea00 00:20:49.114 [2024-12-09 04:10:30.837014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.114 [2024-12-09 04:10:30.837060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:49.114 [2024-12-09 04:10:30.849895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eee190 00:20:49.114 [2024-12-09 04:10:30.851663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.114 [2024-12-09 
04:10:30.851696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.114 [2024-12-09 04:10:30.864452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eed920 00:20:49.114 [2024-12-09 04:10:30.866064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.114 [2024-12-09 04:10:30.866097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:49.114 [2024-12-09 04:10:30.879226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eed0b0 00:20:49.114 [2024-12-09 04:10:30.881347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.114 [2024-12-09 04:10:30.881380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:49.114 [2024-12-09 04:10:30.893764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eec840 00:20:49.114 [2024-12-09 04:10:30.895424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.115 [2024-12-09 04:10:30.895457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:49.115 [2024-12-09 04:10:30.908014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eebfd0 00:20:49.115 [2024-12-09 04:10:30.909589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.115 [2024-12-09 04:10:30.909636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:49.115 [2024-12-09 04:10:30.923072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eeb760 00:20:49.115 [2024-12-09 04:10:30.925117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.115 [2024-12-09 04:10:30.925151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:49.115 [2024-12-09 04:10:30.939574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eeaef0 00:20:49.115 [2024-12-09 04:10:30.941272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.115 [2024-12-09 04:10:30.941306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:49.115 [2024-12-09 04:10:30.954927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eea680 00:20:49.115 [2024-12-09 04:10:30.957048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:49.115 [2024-12-09 04:10:30.957084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:49.115 [2024-12-09 04:10:30.970251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee9e10 00:20:49.115 [2024-12-09 04:10:30.971941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.115 [2024-12-09 04:10:30.971976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:49.115 [2024-12-09 04:10:30.985274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee95a0 00:20:49.115 [2024-12-09 04:10:30.986898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.115 [2024-12-09 04:10:30.987102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:49.115 [2024-12-09 04:10:31.000688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee8d30 00:20:49.115 [2024-12-09 04:10:31.002384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.115 [2024-12-09 04:10:31.002417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:49.115 [2024-12-09 04:10:31.015611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee84c0 00:20:49.115 [2024-12-09 04:10:31.017243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.115 [2024-12-09 04:10:31.017276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:49.115 [2024-12-09 04:10:31.030468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee7c50 00:20:49.115 [2024-12-09 04:10:31.032165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.115 [2024-12-09 04:10:31.032237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:49.115 [2024-12-09 04:10:31.044864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee73e0 00:20:49.115 [2024-12-09 04:10:31.046326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.115 [2024-12-09 04:10:31.046549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:49.115 [2024-12-09 04:10:31.059279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee6b70 00:20:49.115 [2024-12-09 04:10:31.060836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7580 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.115 [2024-12-09 04:10:31.060868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:49.373 [2024-12-09 04:10:31.074275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee6300 00:20:49.373 [2024-12-09 04:10:31.075966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.373 [2024-12-09 04:10:31.076147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:49.373 [2024-12-09 04:10:31.089325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee5a90 00:20:49.373 [2024-12-09 04:10:31.091042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.373 [2024-12-09 04:10:31.091288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.373 [2024-12-09 04:10:31.103930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee5220 00:20:49.373 [2024-12-09 04:10:31.105400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.373 [2024-12-09 04:10:31.105614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:49.373 16953.00 IOPS, 66.22 MiB/s [2024-12-09T04:10:31.323Z] [2024-12-09 04:10:31.117936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee49b0 00:20:49.373 [2024-12-09 04:10:31.119348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.373 [2024-12-09 04:10:31.119550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:49.373 [2024-12-09 04:10:31.132208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee4140 00:20:49.373 [2024-12-09 04:10:31.133649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.373 [2024-12-09 04:10:31.133864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:49.373 [2024-12-09 04:10:31.146353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee38d0 00:20:49.373 [2024-12-09 04:10:31.147765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.373 [2024-12-09 04:10:31.147960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:49.373 [2024-12-09 04:10:31.160509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee3060 00:20:49.373 [2024-12-09 04:10:31.162171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.373 [2024-12-09 04:10:31.162391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:49.373 [2024-12-09 04:10:31.175605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee27f0 00:20:49.373 [2024-12-09 04:10:31.177264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.373 [2024-12-09 04:10:31.177300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:49.373 [2024-12-09 04:10:31.189765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee1f80 00:20:49.373 [2024-12-09 04:10:31.191097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.373 [2024-12-09 04:10:31.191131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:49.373 [2024-12-09 04:10:31.203639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee1710 00:20:49.373 [2024-12-09 04:10:31.204831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.373 [2024-12-09 04:10:31.204863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:49.373 [2024-12-09 04:10:31.218322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee0ea0 00:20:49.373 [2024-12-09 04:10:31.219630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.373 [2024-12-09 04:10:31.219662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:49.373 [2024-12-09 04:10:31.233429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee0630 00:20:49.373 [2024-12-09 04:10:31.235043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.373 [2024-12-09 04:10:31.235270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:49.373 [2024-12-09 04:10:31.249751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016edfdc0 00:20:49.373 [2024-12-09 04:10:31.251354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.374 [2024-12-09 04:10:31.251596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:49.374 [2024-12-09 04:10:31.264785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016edf550 00:20:49.374 [2024-12-09 
04:10:31.266225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.374 [2024-12-09 04:10:31.266422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:49.374 [2024-12-09 04:10:31.279634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016edece0 00:20:49.374 [2024-12-09 04:10:31.280898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.374 [2024-12-09 04:10:31.281094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:49.374 [2024-12-09 04:10:31.294202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ede470 00:20:49.374 [2024-12-09 04:10:31.295583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.374 [2024-12-09 04:10:31.295789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:49.374 [2024-12-09 04:10:31.314387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eddc00 00:20:49.374 [2024-12-09 04:10:31.317433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.374 [2024-12-09 04:10:31.317668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.632 [2024-12-09 04:10:31.330254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ede470 00:20:49.632 [2024-12-09 04:10:31.332622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.632 [2024-12-09 04:10:31.332831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:49.632 [2024-12-09 04:10:31.345260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016edece0 00:20:49.632 [2024-12-09 04:10:31.348218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.632 [2024-12-09 04:10:31.348429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:49.632 [2024-12-09 04:10:31.359912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016edf550 00:20:49.632 [2024-12-09 04:10:31.362098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.632 [2024-12-09 04:10:31.362374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:49.632 [2024-12-09 04:10:31.374201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016edfdc0 00:20:49.632 
[2024-12-09 04:10:31.376442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.632 [2024-12-09 04:10:31.376615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:49.632 [2024-12-09 04:10:31.389864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee0630 00:20:49.632 [2024-12-09 04:10:31.392150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.632 [2024-12-09 04:10:31.392211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:49.632 [2024-12-09 04:10:31.404854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee0ea0 00:20:49.632 [2024-12-09 04:10:31.406913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.632 [2024-12-09 04:10:31.407124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:49.632 [2024-12-09 04:10:31.419015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee1710 00:20:49.632 [2024-12-09 04:10:31.421162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.632 [2024-12-09 04:10:31.421379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:49.632 [2024-12-09 04:10:31.433030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee1f80 00:20:49.632 [2024-12-09 04:10:31.435256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.632 [2024-12-09 04:10:31.435466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:49.633 [2024-12-09 04:10:31.447181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee27f0 00:20:49.633 [2024-12-09 04:10:31.449310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.633 [2024-12-09 04:10:31.449516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:49.633 [2024-12-09 04:10:31.460934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee3060 00:20:49.633 [2024-12-09 04:10:31.463150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.633 [2024-12-09 04:10:31.463413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:49.633 [2024-12-09 04:10:31.475951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with 
pdu=0x200016ee38d0 00:20:49.633 [2024-12-09 04:10:31.478012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.633 [2024-12-09 04:10:31.478272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:49.633 [2024-12-09 04:10:31.490012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee4140 00:20:49.633 [2024-12-09 04:10:31.492165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.633 [2024-12-09 04:10:31.492381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:49.633 [2024-12-09 04:10:31.505234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee49b0 00:20:49.633 [2024-12-09 04:10:31.507334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.633 [2024-12-09 04:10:31.507376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:49.633 [2024-12-09 04:10:31.519527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee5220 00:20:49.633 [2024-12-09 04:10:31.521371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.633 [2024-12-09 04:10:31.521405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:49.633 [2024-12-09 04:10:31.532947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee5a90 00:20:49.633 [2024-12-09 04:10:31.534858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.633 [2024-12-09 04:10:31.535048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:49.633 [2024-12-09 04:10:31.546705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee6300 00:20:49.633 [2024-12-09 04:10:31.548634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.633 [2024-12-09 04:10:31.548671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.633 [2024-12-09 04:10:31.560133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee6b70 00:20:49.633 [2024-12-09 04:10:31.561926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.633 [2024-12-09 04:10:31.561958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:49.633 [2024-12-09 04:10:31.573438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15e4b70) with pdu=0x200016ee73e0 00:20:49.633 [2024-12-09 04:10:31.575501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.633 [2024-12-09 04:10:31.575535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:49.891 [2024-12-09 04:10:31.588138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee7c50 00:20:49.891 [2024-12-09 04:10:31.590804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.892 [2024-12-09 04:10:31.590996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:49.892 [2024-12-09 04:10:31.603656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee84c0 00:20:49.892 [2024-12-09 04:10:31.605479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.892 [2024-12-09 04:10:31.605512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:49.892 [2024-12-09 04:10:31.617462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee8d30 00:20:49.892 [2024-12-09 04:10:31.619425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.892 [2024-12-09 04:10:31.619460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:49.892 [2024-12-09 04:10:31.631761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee95a0 00:20:49.892 [2024-12-09 04:10:31.633579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.892 [2024-12-09 04:10:31.633611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:49.892 [2024-12-09 04:10:31.645687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ee9e10 00:20:49.892 [2024-12-09 04:10:31.647541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.892 [2024-12-09 04:10:31.647574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:49.892 [2024-12-09 04:10:31.659511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eea680 00:20:49.892 [2024-12-09 04:10:31.661255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.892 [2024-12-09 04:10:31.661287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:49.892 [2024-12-09 04:10:31.673343] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eeaef0 00:20:49.892 [2024-12-09 04:10:31.675122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.892 [2024-12-09 04:10:31.675156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:49.892 [2024-12-09 04:10:31.687137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eeb760 00:20:49.892 [2024-12-09 04:10:31.689293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.892 [2024-12-09 04:10:31.689323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:49.892 [2024-12-09 04:10:31.701676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eebfd0 00:20:49.892 [2024-12-09 04:10:31.703464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.892 [2024-12-09 04:10:31.703498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:49.892 [2024-12-09 04:10:31.715573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eec840 00:20:49.892 [2024-12-09 04:10:31.717251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.892 [2024-12-09 04:10:31.717283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:49.892 [2024-12-09 04:10:31.729888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eed0b0 00:20:49.892 [2024-12-09 04:10:31.731753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.892 [2024-12-09 04:10:31.731785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:49.892 [2024-12-09 04:10:31.744085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eed920 00:20:49.892 [2024-12-09 04:10:31.745762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.892 [2024-12-09 04:10:31.745793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:49.892 [2024-12-09 04:10:31.757849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eee190 00:20:49.892 [2024-12-09 04:10:31.759746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.892 [2024-12-09 04:10:31.759778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:49.892 [2024-12-09 04:10:31.771756] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eeea00 00:20:49.892 [2024-12-09 04:10:31.773303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.892 [2024-12-09 04:10:31.773336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.892 [2024-12-09 04:10:31.785078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eef270 00:20:49.892 [2024-12-09 04:10:31.786661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.892 [2024-12-09 04:10:31.786694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:49.892 [2024-12-09 04:10:31.798530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016eefae0 00:20:49.892 [2024-12-09 04:10:31.800528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.892 [2024-12-09 04:10:31.800560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:49.892 [2024-12-09 04:10:31.813417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef0350 00:20:49.892 [2024-12-09 04:10:31.815283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.892 [2024-12-09 04:10:31.815316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:49.892 [2024-12-09 04:10:31.827876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef0bc0 00:20:49.892 [2024-12-09 04:10:31.829468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.892 [2024-12-09 04:10:31.829500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:50.150 [2024-12-09 04:10:31.842830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef1430 00:20:50.150 [2024-12-09 04:10:31.844812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:31.844843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:50.151 [2024-12-09 04:10:31.857741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef1ca0 00:20:50.151 [2024-12-09 04:10:31.859532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:31.859565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:50.151 [2024-12-09 
04:10:31.872018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef2510 00:20:50.151 [2024-12-09 04:10:31.873429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:31.873461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:50.151 [2024-12-09 04:10:31.885422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef2d80 00:20:50.151 [2024-12-09 04:10:31.887021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:31.887065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:50.151 [2024-12-09 04:10:31.899067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef35f0 00:20:50.151 [2024-12-09 04:10:31.900626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:31.900662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:50.151 [2024-12-09 04:10:31.912887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef3e60 00:20:50.151 [2024-12-09 04:10:31.914310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:31.914527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:50.151 [2024-12-09 04:10:31.926478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef46d0 00:20:50.151 [2024-12-09 04:10:31.928012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:31.928047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:50.151 [2024-12-09 04:10:31.940221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef4f40 00:20:50.151 [2024-12-09 04:10:31.941551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:31.941584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:50.151 [2024-12-09 04:10:31.953418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef57b0 00:20:50.151 [2024-12-09 04:10:31.955065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:31.955100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:20:50.151 [2024-12-09 04:10:31.967130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef6020 00:20:50.151 [2024-12-09 04:10:31.968434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:31.968467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:50.151 [2024-12-09 04:10:31.980492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef6890 00:20:50.151 [2024-12-09 04:10:31.981748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:31.981778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:50.151 [2024-12-09 04:10:31.993642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef7100 00:20:50.151 [2024-12-09 04:10:31.995051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:31.995275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:50.151 [2024-12-09 04:10:32.007654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef7970 00:20:50.151 [2024-12-09 04:10:32.008879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:32.008912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.151 [2024-12-09 04:10:32.021811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef81e0 00:20:50.151 [2024-12-09 04:10:32.023387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:32.023420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:50.151 [2024-12-09 04:10:32.037063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef8a50 00:20:50.151 [2024-12-09 04:10:32.038643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:32.038684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:50.151 [2024-12-09 04:10:32.051939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef92c0 00:20:50.151 [2024-12-09 04:10:32.053338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:32.053370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 
sqhd:007a p:0 m:0 dnr:0 00:20:50.151 [2024-12-09 04:10:32.066611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016ef9b30 00:20:50.151 [2024-12-09 04:10:32.068015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:32.068049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:50.151 [2024-12-09 04:10:32.080759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efa3a0 00:20:50.151 [2024-12-09 04:10:32.082578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:32.082788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:50.151 [2024-12-09 04:10:32.096197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efac10 00:20:50.151 [2024-12-09 04:10:32.097973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.151 [2024-12-09 04:10:32.098211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:50.409 [2024-12-09 04:10:32.112300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4b70) with pdu=0x200016efb480 00:20:50.409 [2024-12-09 04:10:32.113829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:50.409 [2024-12-09 04:10:32.114046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:50.409 17331.50 IOPS, 67.70 MiB/s 00:20:50.409 Latency(us) 00:20:50.409 [2024-12-09T04:10:32.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.409 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:50.409 nvme0n1 : 2.01 17334.05 67.71 0.00 0.00 7369.94 4617.31 28955.00 00:20:50.409 [2024-12-09T04:10:32.359Z] =================================================================================================================== 00:20:50.409 [2024-12-09T04:10:32.359Z] Total : 17334.05 67.71 0.00 0.00 7369.94 4617.31 28955.00 00:20:50.409 { 00:20:50.409 "results": [ 00:20:50.409 { 00:20:50.409 "job": "nvme0n1", 00:20:50.409 "core_mask": "0x2", 00:20:50.409 "workload": "randwrite", 00:20:50.409 "status": "finished", 00:20:50.409 "queue_depth": 128, 00:20:50.409 "io_size": 4096, 00:20:50.409 "runtime": 2.008302, 00:20:50.409 "iops": 17334.04637350359, 00:20:50.409 "mibps": 67.71111864649839, 00:20:50.409 "io_failed": 0, 00:20:50.409 "io_timeout": 0, 00:20:50.409 "avg_latency_us": 7369.94020619849, 00:20:50.409 "min_latency_us": 4617.309090909091, 00:20:50.409 "max_latency_us": 28954.996363636365 00:20:50.409 } 00:20:50.409 ], 00:20:50.409 "core_count": 1 00:20:50.409 } 00:20:50.409 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:50.409 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # 
bperf_rpc bdev_get_iostat -b nvme0n1 00:20:50.409 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:50.409 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:50.409 | .driver_specific 00:20:50.409 | .nvme_error 00:20:50.409 | .status_code 00:20:50.409 | .command_transient_transport_error' 00:20:50.667 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 136 > 0 )) 00:20:50.667 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80943 00:20:50.667 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80943 ']' 00:20:50.667 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80943 00:20:50.667 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:50.667 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.668 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80943 00:20:50.668 killing process with pid 80943 00:20:50.668 Received shutdown signal, test time was about 2.000000 seconds 00:20:50.668 00:20:50.668 Latency(us) 00:20:50.668 [2024-12-09T04:10:32.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.668 [2024-12-09T04:10:32.618Z] =================================================================================================================== 00:20:50.668 [2024-12-09T04:10:32.618Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.668 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:50.668 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:50.668 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80943' 00:20:50.668 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80943 00:20:50.668 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80943 00:20:50.925 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:20:50.925 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:50.925 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:50.925 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:50.925 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:50.925 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=81009 00:20:50.925 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 81009 /var/tmp/bperf.sock 00:20:50.925 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:20:50.925 04:10:32 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 81009 ']' 00:20:50.925 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:50.925 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.925 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:50.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:50.925 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.925 04:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:50.925 [2024-12-09 04:10:32.789991] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:20:50.925 [2024-12-09 04:10:32.790370] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81009 ] 00:20:50.925 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:50.925 Zero copy mechanism will not be used. 00:20:51.182 [2024-12-09 04:10:32.938045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.182 [2024-12-09 04:10:32.994719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.182 [2024-12-09 04:10:33.066590] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:51.439 04:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.439 04:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:51.439 04:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:51.439 04:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:51.439 04:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:51.439 04:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.439 04:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:51.439 04:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.439 04:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:51.439 04:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:52.007 nvme0n1 00:20:52.007 04:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:52.007 04:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.007 04:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:52.007 04:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.007 04:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:52.007 04:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:52.007 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:52.007 Zero copy mechanism will not be used. 00:20:52.007 Running I/O for 2 seconds... 00:20:52.007 [2024-12-09 04:10:33.833172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.007 [2024-12-09 04:10:33.833569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.007 [2024-12-09 04:10:33.833930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.007 [2024-12-09 04:10:33.839884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.007 [2024-12-09 04:10:33.840086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.007 [2024-12-09 04:10:33.840110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.007 [2024-12-09 04:10:33.845610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.007 [2024-12-09 04:10:33.845832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.007 [2024-12-09 04:10:33.845866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.007 [2024-12-09 04:10:33.851423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.007 [2024-12-09 04:10:33.851588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.007 [2024-12-09 04:10:33.851611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.007 [2024-12-09 04:10:33.857350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.007 [2024-12-09 04:10:33.857488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.007 [2024-12-09 04:10:33.857511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.007 [2024-12-09 04:10:33.862621] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.007 [2024-12-09 04:10:33.862948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.007 [2024-12-09 04:10:33.862970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.007 [2024-12-09 04:10:33.868270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.007 [2024-12-09 04:10:33.868455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.007 [2024-12-09 04:10:33.868477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.007 [2024-12-09 04:10:33.873892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.007 [2024-12-09 04:10:33.874054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.007 [2024-12-09 04:10:33.874076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.007 [2024-12-09 04:10:33.879753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.007 [2024-12-09 04:10:33.879877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.007 [2024-12-09 04:10:33.879899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.007 [2024-12-09 04:10:33.885754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.007 [2024-12-09 04:10:33.885936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.007 [2024-12-09 04:10:33.885958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.008 [2024-12-09 04:10:33.891423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.008 [2024-12-09 04:10:33.891613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.008 [2024-12-09 04:10:33.891636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.008 [2024-12-09 04:10:33.896781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.008 [2024-12-09 04:10:33.896939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.008 [2024-12-09 04:10:33.896961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.008 
[2024-12-09 04:10:33.902233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.008 [2024-12-09 04:10:33.902480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.008 [2024-12-09 04:10:33.902502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.008 [2024-12-09 04:10:33.907731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.008 [2024-12-09 04:10:33.907872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.008 [2024-12-09 04:10:33.907894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.008 [2024-12-09 04:10:33.913207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.008 [2024-12-09 04:10:33.913481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.008 [2024-12-09 04:10:33.913530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.008 [2024-12-09 04:10:33.918614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.008 [2024-12-09 04:10:33.918789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.008 [2024-12-09 04:10:33.918810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.008 [2024-12-09 04:10:33.923952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.008 [2024-12-09 04:10:33.924200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.008 [2024-12-09 04:10:33.924221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.008 [2024-12-09 04:10:33.929243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.008 [2024-12-09 04:10:33.929404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.008 [2024-12-09 04:10:33.929425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.008 [2024-12-09 04:10:33.934283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.008 [2024-12-09 04:10:33.934499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.008 [2024-12-09 04:10:33.934521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:20:52.008 [2024-12-09 04:10:33.939574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.008 [2024-12-09 04:10:33.939779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.008 [2024-12-09 04:10:33.939800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.008 [2024-12-09 04:10:33.944753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.008 [2024-12-09 04:10:33.944906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.008 [2024-12-09 04:10:33.944927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.008 [2024-12-09 04:10:33.950468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.008 [2024-12-09 04:10:33.950649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.008 [2024-12-09 04:10:33.950670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:33.956108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:33.956333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:33.956363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:33.961739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:33.962009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:33.962030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:33.967849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:33.968026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:33.968057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:33.975693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:33.975805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:33.975827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:33.981897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:33.982259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:33.982283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:33.987653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:33.987806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:33.987827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:33.992956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:33.993089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:33.993109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:33.998279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:33.998465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:33.998501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:34.003557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:34.003716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:34.003737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:34.008673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:34.008859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:34.008881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:34.014345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:34.014543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:34.014564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:34.020086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:34.020266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:34.020288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:34.025802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:34.026072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:34.026093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:34.031875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:34.032060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:34.032080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:34.037235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:34.037440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:34.037477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:34.042526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:34.042708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:34.042728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:34.047672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:34.047816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:34.047838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:34.052789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:34.052974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:34.052995] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:34.058010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:34.058292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:34.058315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:34.063805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:34.063966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:34.063987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:34.069292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:34.069485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:34.069507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:34.075191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:34.075434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:34.075457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:34.080544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:34.080783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:34.080805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:34.086237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:34.086539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:34.086855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.277 [2024-12-09 04:10:34.091662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.277 [2024-12-09 04:10:34.091979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.277 [2024-12-09 04:10:34.092174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.096879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.097120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.097398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.102257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.102596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.102884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.107837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.108121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.108365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.113109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.113401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.113659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.118746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.118931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.118955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.124367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.124561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.124583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.129720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.129987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 
04:10:34.130020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.135331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.135522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.135544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.140742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.140925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.140945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.146005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.146376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.146399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.151418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.151607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.151628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.156684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.156854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.156876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.161743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.161972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.161994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.167527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.167745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:52.278 [2024-12-09 04:10:34.167766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.173008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.173172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.173236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.178509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.178656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.178682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.183635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.183819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.183840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.188827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.188995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.189016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.193938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.194234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.194257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.199356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.199537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.199558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.204657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.204829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15360 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.204851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.209928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.210270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.210293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.215499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.215679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.215700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.278 [2024-12-09 04:10:34.221080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.278 [2024-12-09 04:10:34.221282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.278 [2024-12-09 04:10:34.221304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.537 [2024-12-09 04:10:34.227042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.537 [2024-12-09 04:10:34.227170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.537 [2024-12-09 04:10:34.227191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.537 [2024-12-09 04:10:34.232573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.537 [2024-12-09 04:10:34.232774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.537 [2024-12-09 04:10:34.232795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.537 [2024-12-09 04:10:34.237892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.537 [2024-12-09 04:10:34.238188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.537 [2024-12-09 04:10:34.238211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.537 [2024-12-09 04:10:34.243581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.537 [2024-12-09 04:10:34.243783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.537 [2024-12-09 04:10:34.243804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.537 [2024-12-09 04:10:34.248800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.537 [2024-12-09 04:10:34.248993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.537 [2024-12-09 04:10:34.249013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.537 [2024-12-09 04:10:34.253894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.537 [2024-12-09 04:10:34.254155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.254177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.259177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.259387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.259408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.264128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.264396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.264431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.269131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.269363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.269384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.274475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.274632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.274664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.280118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.280303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.280352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.285620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.285857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.285879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.291563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.291740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.291761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.296945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.297248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.297270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.302753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.302898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.302919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.308106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.308316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.308338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.313212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.313347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.313367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.318367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 
04:10:34.318627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.318657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.323435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.323642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.323663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.328700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.328918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.328941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.334089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.334338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.334361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.339370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.339562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.339583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.344849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.345101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.345123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.350655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.350807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.350829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.355805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with 
pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.355987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.356008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.360883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.361156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.361180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.366196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.366369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.366390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.371142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.371314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.371335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.376435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.376594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.376615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.381888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.382099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.382188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.387086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.387309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.387332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.392281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.392468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.392489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.397331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.397486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.397507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.402303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.402571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.402597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.407319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.407515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.407536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.412341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.412531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.412552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.417494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.417684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.417705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.422852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.423118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.423140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.428707] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.428882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.428904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.434498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.434725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.434746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.440367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.538 [2024-12-09 04:10:34.440535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.538 [2024-12-09 04:10:34.440556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.538 [2024-12-09 04:10:34.445920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.539 [2024-12-09 04:10:34.446153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.539 [2024-12-09 04:10:34.446193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.539 [2024-12-09 04:10:34.451922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.539 [2024-12-09 04:10:34.452091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.539 [2024-12-09 04:10:34.452141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.539 [2024-12-09 04:10:34.457737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.539 [2024-12-09 04:10:34.458081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.539 [2024-12-09 04:10:34.458111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.539 [2024-12-09 04:10:34.463785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.539 [2024-12-09 04:10:34.464051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.539 [2024-12-09 04:10:34.464073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.539 
[2024-12-09 04:10:34.469417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.539 [2024-12-09 04:10:34.469696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.539 [2024-12-09 04:10:34.469723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.539 [2024-12-09 04:10:34.475188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.539 [2024-12-09 04:10:34.475647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.539 [2024-12-09 04:10:34.475698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.539 [2024-12-09 04:10:34.480813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.539 [2024-12-09 04:10:34.481078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.539 [2024-12-09 04:10:34.481104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.797 [2024-12-09 04:10:34.486911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.797 [2024-12-09 04:10:34.487375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.797 [2024-12-09 04:10:34.487407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.797 [2024-12-09 04:10:34.493174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.797 [2024-12-09 04:10:34.493512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.797 [2024-12-09 04:10:34.493544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.797 [2024-12-09 04:10:34.498671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.797 [2024-12-09 04:10:34.499102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.797 [2024-12-09 04:10:34.499133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.797 [2024-12-09 04:10:34.504298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.797 [2024-12-09 04:10:34.504625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.797 [2024-12-09 04:10:34.504683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:20:52.797 [2024-12-09 04:10:34.509670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.510084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.510108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.515733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.516001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.516023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.521533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.521802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.521823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.526830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.527126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.527154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.532243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.532579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.532606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.537481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.537755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.537776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.542732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.542996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.543018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.548044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.548478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.548506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.553424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.553732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.553762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.558775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.559074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.559113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.563991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.564447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.564481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.569521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.569820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.569848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.575044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.575410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.575444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.580395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.580684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.580714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.585627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.585895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.585922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.591159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.591497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.591522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.596410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.596732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.596761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.602630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.602922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.602944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.607888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.608154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.608191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.613250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.613563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.613611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.618203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.618529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.618556] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.623146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.623462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.623494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.628233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.628565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.628652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.633398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.633661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.633692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.638481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.638755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.638776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.643421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.643683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.643705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.648320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.648604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.648625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.653220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.653484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.653506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.658083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.658396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.658433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.663104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.663378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.663405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.668024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.668333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.668355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.673019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.673456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.673482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.798 [2024-12-09 04:10:34.678242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.798 [2024-12-09 04:10:34.678528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.798 [2024-12-09 04:10:34.678555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.799 [2024-12-09 04:10:34.683174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.799 [2024-12-09 04:10:34.683480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.799 [2024-12-09 04:10:34.683504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.799 [2024-12-09 04:10:34.688077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.799 [2024-12-09 04:10:34.688434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.799 [2024-12-09 
04:10:34.688463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.799 [2024-12-09 04:10:34.693092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.799 [2024-12-09 04:10:34.693525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.799 [2024-12-09 04:10:34.693549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.799 [2024-12-09 04:10:34.698264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.799 [2024-12-09 04:10:34.698543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.799 [2024-12-09 04:10:34.698597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.799 [2024-12-09 04:10:34.703233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.799 [2024-12-09 04:10:34.703497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.799 [2024-12-09 04:10:34.703524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.799 [2024-12-09 04:10:34.708691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.799 [2024-12-09 04:10:34.709079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.799 [2024-12-09 04:10:34.709103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.799 [2024-12-09 04:10:34.713515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.799 [2024-12-09 04:10:34.713587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.799 [2024-12-09 04:10:34.713609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.799 [2024-12-09 04:10:34.718487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.799 [2024-12-09 04:10:34.718560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.799 [2024-12-09 04:10:34.718582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.799 [2024-12-09 04:10:34.723444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.799 [2024-12-09 04:10:34.723516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:52.799 [2024-12-09 04:10:34.723537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.799 [2024-12-09 04:10:34.728238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.799 [2024-12-09 04:10:34.728327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.799 [2024-12-09 04:10:34.728348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.799 [2024-12-09 04:10:34.733362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.799 [2024-12-09 04:10:34.733437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.799 [2024-12-09 04:10:34.733459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.799 [2024-12-09 04:10:34.738494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.799 [2024-12-09 04:10:34.738572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.799 [2024-12-09 04:10:34.738593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.799 [2024-12-09 04:10:34.743814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:52.799 [2024-12-09 04:10:34.743887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.799 [2024-12-09 04:10:34.743908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.749066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.749356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.749379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.754509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.754619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.754641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.759540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.759636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.759658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.764498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.764566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.764603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.769473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.769542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.769563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.774930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.775027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.775049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.780307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.780399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.780420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.785006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.785075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.785097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.790525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.790606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.790628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.795854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.796054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 
nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.796076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.800995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.801069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.801091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.805885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.805966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.805987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.810860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.810936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.810957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.815613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.815813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.815836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.820779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.820991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.821172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.825935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.826204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.826378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.059 5724.00 IOPS, 715.50 MiB/s [2024-12-09T04:10:35.009Z] [2024-12-09 04:10:34.832420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 
04:10:34.832653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.832820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.837585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.837828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.837992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.842911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.843158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.843450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.847792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.848109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.848371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.852304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.852536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.852707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.857123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.857369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.059 [2024-12-09 04:10:34.857531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.059 [2024-12-09 04:10:34.862274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.059 [2024-12-09 04:10:34.862565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.862748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.867385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with 
pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.867712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.867984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.873695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.873973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.874190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.879084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.879453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.879690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.883537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.883840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.884120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.888014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.888348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.888689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.893085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.893210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.893251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.897779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.897888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.897909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.902921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.903179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.903217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.908285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.908434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.908456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.913202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.913319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.913340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.917927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.918029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.918057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.923082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.923349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.923371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.927973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.928095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.928116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.932701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.932783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.932803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.937416] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.937539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.937560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.942014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.942207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.942245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.946358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.946554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.946575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.950615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.950932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.950954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.955070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.955388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.955410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.959390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.959469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.959489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.963571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.963653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.963674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.967942] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.968060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.968080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.972163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.972306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.972326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.976438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.976513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.976533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.980525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.980610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.980630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.984636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.984759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.984779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.988773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.060 [2024-12-09 04:10:34.988866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.060 [2024-12-09 04:10:34.988886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.060 [2024-12-09 04:10:34.993013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.061 [2024-12-09 04:10:34.993093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.061 [2024-12-09 04:10:34.993114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.061 
[2024-12-09 04:10:34.997228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.061 [2024-12-09 04:10:34.997307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.061 [2024-12-09 04:10:34.997329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.061 [2024-12-09 04:10:35.001548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.061 [2024-12-09 04:10:35.001641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.061 [2024-12-09 04:10:35.001662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.319 [2024-12-09 04:10:35.006477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.319 [2024-12-09 04:10:35.006666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.319 [2024-12-09 04:10:35.006687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.319 [2024-12-09 04:10:35.011306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.319 [2024-12-09 04:10:35.011462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.319 [2024-12-09 04:10:35.011482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.319 [2024-12-09 04:10:35.015873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.319 [2024-12-09 04:10:35.016020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.319 [2024-12-09 04:10:35.016042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.319 [2024-12-09 04:10:35.020146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.319 [2024-12-09 04:10:35.020317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.319 [2024-12-09 04:10:35.020340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.319 [2024-12-09 04:10:35.024378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.319 [2024-12-09 04:10:35.024448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.319 [2024-12-09 04:10:35.024469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:20:53.319 [2024-12-09 04:10:35.028688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.319 [2024-12-09 04:10:35.028771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.319 [2024-12-09 04:10:35.028792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.319 [2024-12-09 04:10:35.033606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.319 [2024-12-09 04:10:35.033691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.319 [2024-12-09 04:10:35.033712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.319 [2024-12-09 04:10:35.038349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.319 [2024-12-09 04:10:35.038468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.319 [2024-12-09 04:10:35.038501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.319 [2024-12-09 04:10:35.042735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.319 [2024-12-09 04:10:35.042970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.319 [2024-12-09 04:10:35.042991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.319 [2024-12-09 04:10:35.047313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.319 [2024-12-09 04:10:35.047404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.319 [2024-12-09 04:10:35.047424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.319 [2024-12-09 04:10:35.051579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.319 [2024-12-09 04:10:35.051681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.319 [2024-12-09 04:10:35.051702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.319 [2024-12-09 04:10:35.055784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.319 [2024-12-09 04:10:35.055893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.319 [2024-12-09 04:10:35.055914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.319 [2024-12-09 04:10:35.060077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.060153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.060174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.064312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.064390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.064411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.068489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.068577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.068597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.072777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.072874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.072895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.077131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.077243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.077264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.081534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.081621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.081641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.085743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.085837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.085858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.089929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.090027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.090048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.094292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.094393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.094416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.098580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.098804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.098838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.103014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.103305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.103327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.107480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.107566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.107587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.111722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.111849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.111870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.116066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.116143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.116163] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.120271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.120418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.120438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.124552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.124693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.124714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.128822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.128959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.128980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.133040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.133174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.133223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.137313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.137454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.137475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.141422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.141496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.141517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.145629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.145708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.145729] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.149777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.149845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.149865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.154012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.154089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.154109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.158167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.158284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.158306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.162352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.162428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.162464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.166549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.166801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.166834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.171029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.171305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.171545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.175673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.175920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 
04:10:35.176097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.181743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.182085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.182431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.187792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.188073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.188283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.192639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.192880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.193131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.197459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.197691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.197895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.202609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.202933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.203104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.207444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.207575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.207597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.212019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.212204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:53.320 [2024-12-09 04:10:35.212240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.216709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.216888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.216909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.221199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.221274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.221295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.320 [2024-12-09 04:10:35.225916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.320 [2024-12-09 04:10:35.225995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.320 [2024-12-09 04:10:35.226015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.321 [2024-12-09 04:10:35.230540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.321 [2024-12-09 04:10:35.230651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.321 [2024-12-09 04:10:35.230671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.321 [2024-12-09 04:10:35.234987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.321 [2024-12-09 04:10:35.235062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.321 [2024-12-09 04:10:35.235083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.321 [2024-12-09 04:10:35.239371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.321 [2024-12-09 04:10:35.239450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.321 [2024-12-09 04:10:35.239471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.321 [2024-12-09 04:10:35.243865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.321 [2024-12-09 04:10:35.243968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.321 [2024-12-09 04:10:35.243989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.321 [2024-12-09 04:10:35.248108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.321 [2024-12-09 04:10:35.248258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.321 [2024-12-09 04:10:35.248280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.321 [2024-12-09 04:10:35.252479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.321 [2024-12-09 04:10:35.252554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.321 [2024-12-09 04:10:35.252575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.321 [2024-12-09 04:10:35.256838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.321 [2024-12-09 04:10:35.257052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.321 [2024-12-09 04:10:35.257074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.321 [2024-12-09 04:10:35.261550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.321 [2024-12-09 04:10:35.261687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.321 [2024-12-09 04:10:35.261708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.321 [2024-12-09 04:10:35.266371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.321 [2024-12-09 04:10:35.266464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.321 [2024-12-09 04:10:35.266485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.579 [2024-12-09 04:10:35.271172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.579 [2024-12-09 04:10:35.271282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.579 [2024-12-09 04:10:35.271303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.579 [2024-12-09 04:10:35.275923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.579 [2024-12-09 04:10:35.276019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.579 [2024-12-09 04:10:35.276039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.579 [2024-12-09 04:10:35.280314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.579 [2024-12-09 04:10:35.280416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.579 [2024-12-09 04:10:35.280436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.579 [2024-12-09 04:10:35.284636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.579 [2024-12-09 04:10:35.284837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.579 [2024-12-09 04:10:35.284858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.579 [2024-12-09 04:10:35.289252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.579 [2024-12-09 04:10:35.289336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.579 [2024-12-09 04:10:35.289356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.579 [2024-12-09 04:10:35.293779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.579 [2024-12-09 04:10:35.293857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.579 [2024-12-09 04:10:35.293877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.579 [2024-12-09 04:10:35.298114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.579 [2024-12-09 04:10:35.298253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.579 [2024-12-09 04:10:35.298274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.579 [2024-12-09 04:10:35.302460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.579 [2024-12-09 04:10:35.302587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.579 [2024-12-09 04:10:35.302608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.579 [2024-12-09 04:10:35.306839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.579 [2024-12-09 04:10:35.306910] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.579 [2024-12-09 04:10:35.306931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.579 [2024-12-09 04:10:35.311335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.579 [2024-12-09 04:10:35.311406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.579 [2024-12-09 04:10:35.311427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.579 [2024-12-09 04:10:35.315609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.579 [2024-12-09 04:10:35.315677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.579 [2024-12-09 04:10:35.315698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.579 [2024-12-09 04:10:35.319989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.579 [2024-12-09 04:10:35.320075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.579 [2024-12-09 04:10:35.320096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.579 [2024-12-09 04:10:35.324384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.579 [2024-12-09 04:10:35.324478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.579 [2024-12-09 04:10:35.324499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.579 [2024-12-09 04:10:35.328968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.579 [2024-12-09 04:10:35.329205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.579 [2024-12-09 04:10:35.329239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.579 [2024-12-09 04:10:35.333550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.579 [2024-12-09 04:10:35.333624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.579 [2024-12-09 04:10:35.333644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.579 [2024-12-09 04:10:35.338027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.579 [2024-12-09 04:10:35.338197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.579 [2024-12-09 04:10:35.338234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.579 [2024-12-09 04:10:35.342883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.579 [2024-12-09 04:10:35.342960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.579 [2024-12-09 04:10:35.342980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.347784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.347887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.347908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.352063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.352146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.352167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.356731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.356941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.356962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.361208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.361296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.361317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.365523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.365612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.365632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.370050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 
04:10:35.370176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.370197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.374448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.374595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.374626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.378761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.378835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.378856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.382923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.383066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.383086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.387185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.387323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.387344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.391346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.391484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.391504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.395515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.395610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.395631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.399708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with 
pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.399785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.399805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.403895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.404031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.404052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.408021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.408115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.408135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.412600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.412833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.412855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.418384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.418692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.418719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.423751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.423827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.423848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.428418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.428513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.428533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.432663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.432909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.432931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.437080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.437303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.437324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.441501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.441587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.441607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.445743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.445836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.445856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.449972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.450067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.450087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.454217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.454310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.454332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.458480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.458600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.458620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.580 [2024-12-09 04:10:35.462682] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.580 [2024-12-09 04:10:35.462764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.580 [2024-12-09 04:10:35.462785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.581 [2024-12-09 04:10:35.466921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.581 [2024-12-09 04:10:35.467011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.581 [2024-12-09 04:10:35.467031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.581 [2024-12-09 04:10:35.471144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.581 [2024-12-09 04:10:35.471347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.581 [2024-12-09 04:10:35.471369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.581 [2024-12-09 04:10:35.475408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.581 [2024-12-09 04:10:35.475532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.581 [2024-12-09 04:10:35.475569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.581 [2024-12-09 04:10:35.479717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.581 [2024-12-09 04:10:35.479829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.581 [2024-12-09 04:10:35.479850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.581 [2024-12-09 04:10:35.484031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.581 [2024-12-09 04:10:35.484103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.581 [2024-12-09 04:10:35.484124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.581 [2024-12-09 04:10:35.488230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.581 [2024-12-09 04:10:35.488304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.581 [2024-12-09 04:10:35.488324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.581 [2024-12-09 04:10:35.492454] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.581 [2024-12-09 04:10:35.492595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.581 [2024-12-09 04:10:35.492616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.581 [2024-12-09 04:10:35.496628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.581 [2024-12-09 04:10:35.496707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.581 [2024-12-09 04:10:35.496728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.581 [2024-12-09 04:10:35.500787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.581 [2024-12-09 04:10:35.500909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.581 [2024-12-09 04:10:35.500929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.581 [2024-12-09 04:10:35.504957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.581 [2024-12-09 04:10:35.505036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.581 [2024-12-09 04:10:35.505056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.581 [2024-12-09 04:10:35.509168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.581 [2024-12-09 04:10:35.509324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.581 [2024-12-09 04:10:35.509346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.581 [2024-12-09 04:10:35.513418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.581 [2024-12-09 04:10:35.513580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.581 [2024-12-09 04:10:35.513601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.581 [2024-12-09 04:10:35.517517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.581 [2024-12-09 04:10:35.517595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.581 [2024-12-09 04:10:35.517615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.581 
[2024-12-09 04:10:35.521716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.581 [2024-12-09 04:10:35.521797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.581 [2024-12-09 04:10:35.521818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.581 [2024-12-09 04:10:35.526546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.581 [2024-12-09 04:10:35.526645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.581 [2024-12-09 04:10:35.526666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.839 [2024-12-09 04:10:35.531134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.839 [2024-12-09 04:10:35.531248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.839 [2024-12-09 04:10:35.531277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.839 [2024-12-09 04:10:35.535838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.839 [2024-12-09 04:10:35.535915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.839 [2024-12-09 04:10:35.535935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.839 [2024-12-09 04:10:35.540284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.839 [2024-12-09 04:10:35.540510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.839 [2024-12-09 04:10:35.540668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.839 [2024-12-09 04:10:35.544743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.839 [2024-12-09 04:10:35.544817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.839 [2024-12-09 04:10:35.544838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.839 [2024-12-09 04:10:35.549036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.839 [2024-12-09 04:10:35.549113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.839 [2024-12-09 04:10:35.549135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:20:53.839 [2024-12-09 04:10:35.553268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.839 [2024-12-09 04:10:35.553347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.839 [2024-12-09 04:10:35.553368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.557554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.557627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.557647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.561713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.561790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.561810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.565820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.565964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.565984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.569966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.570051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.570071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.574242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.574432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.574479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.578544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.578684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.578704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.582798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.582973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.582994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.587163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.587410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.587434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.591481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.591553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.591605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.595962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.596227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.596250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.600490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.600620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.600641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.605073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.605255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.605278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.609782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.609922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.609943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.614646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.614824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.614844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.619778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.620148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.620170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.625008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.625087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.625108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.630324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.630407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.630431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.635451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.635535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.635574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.640037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.640318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.640340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.644823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.644899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.644920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.649527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.649661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.649682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.654084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.654191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.654230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.658444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.658536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.658557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.662781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.662856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.662876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.667098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.667180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.667229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.671621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.671858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 04:10:35.671879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.840 [2024-12-09 04:10:35.676370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.840 [2024-12-09 04:10:35.676447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.840 [2024-12-09 
04:10:35.676467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.680699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.680785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.680805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.685184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.685266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.685287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.689704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.689779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.689812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.694917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.694997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.695018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.699879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.700166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.700203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.704575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.704672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.704708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.709508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.709610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:53.841 [2024-12-09 04:10:35.709630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.714766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.714912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.714933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.719403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.719487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.719508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.723753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.723835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.723855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.728426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.728614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.728635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.733219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.733334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.733356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.737953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.738081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.738103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.742787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.743099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.743121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.747967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.748233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.748255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.752673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.752951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.752997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.757363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.757593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.757613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.762333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.762461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.762513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.767268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.767480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.767503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.773111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.773272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.773294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.778460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.778555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.778587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.841 [2024-12-09 04:10:35.783813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:53.841 [2024-12-09 04:10:35.783906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.841 [2024-12-09 04:10:35.783927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:54.099 [2024-12-09 04:10:35.789251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:54.099 [2024-12-09 04:10:35.789327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-12-09 04:10:35.789348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:54.099 [2024-12-09 04:10:35.794641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:54.099 [2024-12-09 04:10:35.794823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-12-09 04:10:35.794844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:54.099 [2024-12-09 04:10:35.800110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:54.099 [2024-12-09 04:10:35.800289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-12-09 04:10:35.800337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:54.099 [2024-12-09 04:10:35.805241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:54.099 [2024-12-09 04:10:35.805333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-12-09 04:10:35.805354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:54.099 [2024-12-09 04:10:35.810423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:54.099 [2024-12-09 04:10:35.810549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-12-09 04:10:35.810570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:54.099 [2024-12-09 04:10:35.815622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:54.099 [2024-12-09 04:10:35.815728] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-12-09 04:10:35.815750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:54.099 [2024-12-09 04:10:35.820567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:54.099 [2024-12-09 04:10:35.820679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-12-09 04:10:35.820710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:54.099 [2024-12-09 04:10:35.825418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:54.099 [2024-12-09 04:10:35.825504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.099 [2024-12-09 04:10:35.825525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:54.099 6244.00 IOPS, 780.50 MiB/s [2024-12-09T04:10:36.049Z] [2024-12-09 04:10:35.831339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15e4d10) with pdu=0x200016eff3c8 00:20:54.099 [2024-12-09 04:10:35.831565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.100 [2024-12-09 04:10:35.831586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:54.100 00:20:54.100 Latency(us) 00:20:54.100 [2024-12-09T04:10:36.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.100 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:54.100 nvme0n1 : 2.00 6241.24 780.16 0.00 0.00 2558.15 1660.74 7536.64 00:20:54.100 [2024-12-09T04:10:36.050Z] =================================================================================================================== 00:20:54.100 [2024-12-09T04:10:36.050Z] Total : 6241.24 780.16 0.00 0.00 2558.15 1660.74 7536.64 00:20:54.100 { 00:20:54.100 "results": [ 00:20:54.100 { 00:20:54.100 "job": "nvme0n1", 00:20:54.100 "core_mask": "0x2", 00:20:54.100 "workload": "randwrite", 00:20:54.100 "status": "finished", 00:20:54.100 "queue_depth": 16, 00:20:54.100 "io_size": 131072, 00:20:54.100 "runtime": 2.003447, 00:20:54.100 "iops": 6241.243217314957, 00:20:54.100 "mibps": 780.1554021643697, 00:20:54.100 "io_failed": 0, 00:20:54.100 "io_timeout": 0, 00:20:54.100 "avg_latency_us": 2558.1483068690745, 00:20:54.100 "min_latency_us": 1660.7418181818182, 00:20:54.100 "max_latency_us": 7536.64 00:20:54.100 } 00:20:54.100 ], 00:20:54.100 "core_count": 1 00:20:54.100 } 00:20:54.100 04:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:54.100 04:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:54.100 04:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:54.100 | .driver_specific 00:20:54.100 | .nvme_error 00:20:54.100 | .status_code 00:20:54.100 | 
.command_transient_transport_error' 00:20:54.100 04:10:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:54.357 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 404 > 0 )) 00:20:54.357 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 81009 00:20:54.357 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 81009 ']' 00:20:54.357 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 81009 00:20:54.357 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:54.357 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.357 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81009 00:20:54.357 killing process with pid 81009 00:20:54.357 Received shutdown signal, test time was about 2.000000 seconds 00:20:54.357 00:20:54.357 Latency(us) 00:20:54.357 [2024-12-09T04:10:36.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.357 [2024-12-09T04:10:36.308Z] =================================================================================================================== 00:20:54.358 [2024-12-09T04:10:36.308Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:54.358 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:54.358 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:54.358 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81009' 00:20:54.358 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 81009 00:20:54.358 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 81009 00:20:54.616 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80796 00:20:54.616 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80796 ']' 00:20:54.616 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80796 00:20:54.616 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:54.616 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.616 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80796 00:20:54.616 killing process with pid 80796 00:20:54.616 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:54.616 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:54.616 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80796' 00:20:54.616 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
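[editor's note] The host/digest.sh@71 trace above is the pass/fail check for this test: it fetches per-bdev I/O statistics from the bdevperf RPC socket and pulls the NVMe transient-transport-error counter out with jq, then requires it to be non-zero (404 in this run). A minimal standalone sketch of that check follows; the socket path, bdev name, RPC call and jq filter are taken verbatim from the trace, while the variable names and error message are illustrative assumptions.

#!/usr/bin/env bash
# Sketch (not part of the original log): reproduce the get_transient_errcount
# check from host/digest.sh using the RPC call and jq filter traced above.
# BPERF_SOCK, BDEV and the shell variable names are illustrative assumptions.
BPERF_SOCK=/var/tmp/bperf.sock
BDEV=nvme0n1

# Ask the bdevperf application for per-bdev I/O statistics over its RPC socket.
iostat_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$BPERF_SOCK" bdev_get_iostat -b "$BDEV")

# Count completions that finished as COMMAND TRANSIENT TRANSPORT ERROR (00/22),
# which is how every injected data-digest error above was reported.
errcount=$(jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error' <<< "$iostat_json")

# digest.sh@71 then asserts the counter is non-zero; this run recorded 404.
(( errcount > 0 )) || { echo "no transient transport errors recorded" >&2; exit 1; }
echo "transient transport errors: $errcount"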
common/autotest_common.sh@973 -- # kill 80796 00:20:54.616 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80796 00:20:54.874 ************************************ 00:20:54.874 END TEST nvmf_digest_error 00:20:54.874 ************************************ 00:20:54.874 00:20:54.874 real 0m18.120s 00:20:54.874 user 0m33.998s 00:20:54.874 sys 0m5.741s 00:20:54.874 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:54.874 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:54.874 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:54.874 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:54.874 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:54.874 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:20:54.874 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:54.874 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:20:54.874 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:54.874 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:54.874 rmmod nvme_tcp 00:20:54.874 rmmod nvme_fabrics 00:20:54.874 rmmod nvme_keyring 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:20:55.132 Process with pid 80796 is not found 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80796 ']' 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80796 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80796 ']' 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80796 00:20:55.132 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80796) - No such process 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80796 is not found' 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:55.132 04:10:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:55.132 04:10:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:55.132 04:10:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:55.132 04:10:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:55.132 04:10:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:55.132 04:10:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.132 04:10:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.132 04:10:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.389 04:10:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:20:55.389 00:20:55.389 real 0m35.664s 00:20:55.389 user 1m5.118s 00:20:55.389 sys 0m11.758s 00:20:55.389 ************************************ 00:20:55.389 END TEST nvmf_digest 00:20:55.389 ************************************ 00:20:55.389 04:10:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:55.389 04:10:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:55.389 04:10:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:20:55.389 04:10:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:20:55.389 04:10:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:55.389 04:10:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:55.389 04:10:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:55.389 04:10:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.390 ************************************ 00:20:55.390 START TEST nvmf_host_multipath 00:20:55.390 ************************************ 00:20:55.390 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:55.390 * Looking for test storage... 
00:20:55.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:55.390 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:55.390 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:20:55.390 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:55.648 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:55.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.649 --rc genhtml_branch_coverage=1 00:20:55.649 --rc genhtml_function_coverage=1 00:20:55.649 --rc genhtml_legend=1 00:20:55.649 --rc geninfo_all_blocks=1 00:20:55.649 --rc geninfo_unexecuted_blocks=1 00:20:55.649 00:20:55.649 ' 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:55.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.649 --rc genhtml_branch_coverage=1 00:20:55.649 --rc genhtml_function_coverage=1 00:20:55.649 --rc genhtml_legend=1 00:20:55.649 --rc geninfo_all_blocks=1 00:20:55.649 --rc geninfo_unexecuted_blocks=1 00:20:55.649 00:20:55.649 ' 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:55.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.649 --rc genhtml_branch_coverage=1 00:20:55.649 --rc genhtml_function_coverage=1 00:20:55.649 --rc genhtml_legend=1 00:20:55.649 --rc geninfo_all_blocks=1 00:20:55.649 --rc geninfo_unexecuted_blocks=1 00:20:55.649 00:20:55.649 ' 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:55.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.649 --rc genhtml_branch_coverage=1 00:20:55.649 --rc genhtml_function_coverage=1 00:20:55.649 --rc genhtml_legend=1 00:20:55.649 --rc geninfo_all_blocks=1 00:20:55.649 --rc geninfo_unexecuted_blocks=1 00:20:55.649 00:20:55.649 ' 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:55.649 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:55.649 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:55.650 Cannot find device "nvmf_init_br" 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:55.650 Cannot find device "nvmf_init_br2" 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:55.650 Cannot find device "nvmf_tgt_br" 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:55.650 Cannot find device "nvmf_tgt_br2" 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:55.650 Cannot find device "nvmf_init_br" 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:55.650 Cannot find device "nvmf_init_br2" 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:55.650 Cannot find device "nvmf_tgt_br" 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:55.650 Cannot find device "nvmf_tgt_br2" 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:55.650 Cannot find device "nvmf_br" 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:55.650 Cannot find device "nvmf_init_if" 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:55.650 Cannot find device "nvmf_init_if2" 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:20:55.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:55.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:55.650 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:55.908 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:55.908 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:55.908 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:55.908 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:55.908 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:55.908 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:55.908 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:55.908 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:55.908 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:55.908 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:55.908 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:55.908 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:55.908 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:55.909 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:55.909 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:20:55.909 00:20:55.909 --- 10.0.0.3 ping statistics --- 00:20:55.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.909 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:55.909 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:55.909 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:20:55.909 00:20:55.909 --- 10.0.0.4 ping statistics --- 00:20:55.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.909 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:55.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:20:55.909 00:20:55.909 --- 10.0.0.1 ping statistics --- 00:20:55.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.909 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:55.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:55.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:20:55.909 00:20:55.909 --- 10.0.0.2 ping statistics --- 00:20:55.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.909 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=81323 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 81323 00:20:55.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81323 ']' 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.909 04:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:55.909 [2024-12-09 04:10:37.845399] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:20:55.909 [2024-12-09 04:10:37.845510] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.166 [2024-12-09 04:10:37.987581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:56.167 [2024-12-09 04:10:38.037572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.167 [2024-12-09 04:10:38.037907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.167 [2024-12-09 04:10:38.038078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.167 [2024-12-09 04:10:38.038285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.167 [2024-12-09 04:10:38.038331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.167 [2024-12-09 04:10:38.043218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.167 [2024-12-09 04:10:38.043242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.167 [2024-12-09 04:10:38.099820] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:56.424 04:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.424 04:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:20:56.424 04:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:56.424 04:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.424 04:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:56.424 04:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.424 04:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81323 00:20:56.424 04:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:56.681 [2024-12-09 04:10:38.499360] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.681 04:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:56.939 Malloc0 00:20:56.939 04:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:57.210 04:10:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:57.468 04:10:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:57.726 [2024-12-09 04:10:39.606742] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:57.726 04:10:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:57.983 [2024-12-09 04:10:39.830926] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:57.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.983 04:10:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81371 00:20:57.983 04:10:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:57.983 04:10:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.983 04:10:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81371 /var/tmp/bdevperf.sock 00:20:57.983 04:10:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81371 ']' 00:20:57.983 04:10:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.983 04:10:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.983 04:10:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.983 04:10:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.983 04:10:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:58.548 04:10:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.549 04:10:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:20:58.549 04:10:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:58.549 04:10:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:59.113 Nvme0n1 00:20:59.113 04:10:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:59.386 Nvme0n1 00:20:59.386 04:10:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:20:59.386 04:10:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:00.319 04:10:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:00.319 04:10:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:00.577 04:10:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:00.835 04:10:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:00.835 04:10:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81323 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:00.835 04:10:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81403 00:21:00.835 04:10:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:07.419 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:07.419 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:07.419 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:07.419 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:07.419 Attaching 4 probes... 00:21:07.419 @path[10.0.0.3, 4421]: 17920 00:21:07.419 @path[10.0.0.3, 4421]: 18378 00:21:07.419 @path[10.0.0.3, 4421]: 18137 00:21:07.419 @path[10.0.0.3, 4421]: 18296 00:21:07.419 @path[10.0.0.3, 4421]: 18056 00:21:07.419 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:07.419 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:07.419 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:07.419 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:07.419 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:07.419 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:07.419 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81403 00:21:07.419 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:07.419 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:07.419 04:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:07.419 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:07.678 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:07.678 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81522 00:21:07.678 04:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81323 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:07.678 04:10:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:14.242 04:10:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:14.242 04:10:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:14.242 04:10:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:14.242 04:10:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:14.242 Attaching 4 probes... 00:21:14.242 @path[10.0.0.3, 4420]: 17916 00:21:14.242 @path[10.0.0.3, 4420]: 18688 00:21:14.242 @path[10.0.0.3, 4420]: 17704 00:21:14.242 @path[10.0.0.3, 4420]: 17291 00:21:14.242 @path[10.0.0.3, 4420]: 16729 00:21:14.242 04:10:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:14.242 04:10:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:14.242 04:10:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:14.242 04:10:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:14.242 04:10:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:14.242 04:10:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:14.242 04:10:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81522 00:21:14.242 04:10:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:14.242 04:10:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:14.242 04:10:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:14.242 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:14.500 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:14.500 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81323 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:14.500 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81639 00:21:14.500 04:10:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:21.094 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:21.094 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:21.094 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:21.094 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:21.094 Attaching 4 probes... 00:21:21.094 @path[10.0.0.3, 4421]: 14060 00:21:21.094 @path[10.0.0.3, 4421]: 17939 00:21:21.094 @path[10.0.0.3, 4421]: 18060 00:21:21.094 @path[10.0.0.3, 4421]: 17669 00:21:21.094 @path[10.0.0.3, 4421]: 17789 00:21:21.094 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:21.094 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:21.094 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:21.094 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:21.094 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:21.094 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:21.094 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81639 00:21:21.094 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:21.094 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:21.094 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:21.094 04:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:21.353 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:21.353 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81747 00:21:21.353 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81323 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:21.354 04:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:27.922 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:27.922 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:27.922 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:27.922 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:27.922 Attaching 4 probes... 
00:21:27.922 00:21:27.922 00:21:27.922 00:21:27.922 00:21:27.922 00:21:27.922 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:27.922 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:27.922 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:27.922 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:27.922 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:27.922 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:27.922 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81747 00:21:27.922 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:27.922 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:27.922 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:27.922 04:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:28.183 04:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:28.183 04:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81323 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:28.183 04:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81865 00:21:28.183 04:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:34.745 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:34.745 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:34.745 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:34.745 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:34.745 Attaching 4 probes... 
00:21:34.745 @path[10.0.0.3, 4421]: 19917 00:21:34.745 @path[10.0.0.3, 4421]: 20214 00:21:34.745 @path[10.0.0.3, 4421]: 20119 00:21:34.745 @path[10.0.0.3, 4421]: 20381 00:21:34.745 @path[10.0.0.3, 4421]: 20188 00:21:34.745 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:34.745 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:34.745 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:34.745 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:34.745 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:34.745 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:34.745 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81865 00:21:34.745 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:34.745 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:34.745 04:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:21:35.740 04:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:21:35.740 04:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81989 00:21:35.740 04:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81323 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:35.740 04:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:42.326 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:42.326 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:42.326 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:42.326 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:42.326 Attaching 4 probes... 
00:21:42.326 @path[10.0.0.3, 4420]: 19345 00:21:42.326 @path[10.0.0.3, 4420]: 19636 00:21:42.326 @path[10.0.0.3, 4420]: 19525 00:21:42.326 @path[10.0.0.3, 4420]: 19457 00:21:42.326 @path[10.0.0.3, 4420]: 19228 00:21:42.326 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:42.326 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:42.326 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:42.326 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:42.326 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:42.326 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:42.326 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81989 00:21:42.326 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:42.326 04:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:42.326 [2024-12-09 04:11:24.259266] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:42.584 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:42.843 04:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:21:49.423 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:21:49.423 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82163 00:21:49.423 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81323 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:49.423 04:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:54.686 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:54.686 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:54.944 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:54.944 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:54.944 Attaching 4 probes... 
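In the step above the test re-adds the 10.0.0.3:4421 listener, marks it ANA-optimized, and then re-checks where the I/O lands. A minimal sketch of that control path, assembled from the rpc.py invocations printed in the log (the rpc_py and nqn shell variables are shorthand introduced here for readability, not names from the script), is:

#!/usr/bin/env bash
# Illustration only, pieced together from the rpc.py calls visible in the log above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Flip the ANA states so 4421 becomes the optimized path (multipath.sh set_ANA_state).
$rpc_py nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
$rpc_py nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4421 -n optimized

# The listener whose first ANA group reports 'optimized' is the port the host
# multipath policy is expected to use; this is the same jq filter the test runs.
$rpc_py nvmf_subsystem_get_listeners "$nqn" |
  jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'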
00:21:54.944 @path[10.0.0.3, 4421]: 17789 00:21:54.944 @path[10.0.0.3, 4421]: 17812 00:21:54.944 @path[10.0.0.3, 4421]: 18171 00:21:54.944 @path[10.0.0.3, 4421]: 17826 00:21:54.944 @path[10.0.0.3, 4421]: 18488 00:21:54.944 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:54.944 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:54.944 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:54.944 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:54.944 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:54.944 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:54.944 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82163 00:21:54.944 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:54.944 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81371 00:21:54.944 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81371 ']' 00:21:54.944 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81371 00:21:54.944 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:21:54.944 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.944 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81371 00:21:55.203 killing process with pid 81371 00:21:55.204 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:55.204 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:55.204 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81371' 00:21:55.204 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81371 00:21:55.204 04:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81371 00:21:55.204 { 00:21:55.204 "results": [ 00:21:55.204 { 00:21:55.204 "job": "Nvme0n1", 00:21:55.204 "core_mask": "0x4", 00:21:55.204 "workload": "verify", 00:21:55.204 "status": "terminated", 00:21:55.204 "verify_range": { 00:21:55.204 "start": 0, 00:21:55.204 "length": 16384 00:21:55.204 }, 00:21:55.204 "queue_depth": 128, 00:21:55.204 "io_size": 4096, 00:21:55.204 "runtime": 55.628094, 00:21:55.204 "iops": 7973.489079097335, 00:21:55.204 "mibps": 31.146441715223965, 00:21:55.204 "io_failed": 0, 00:21:55.204 "io_timeout": 0, 00:21:55.204 "avg_latency_us": 16027.115399530647, 00:21:55.204 "min_latency_us": 714.9381818181819, 00:21:55.204 "max_latency_us": 7046430.72 00:21:55.204 } 00:21:55.204 ], 00:21:55.204 "core_count": 1 00:21:55.204 } 00:21:55.484 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81371 00:21:55.484 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:55.484 [2024-12-09 04:10:39.898906] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 
24.03.0 initialization... 00:21:55.484 [2024-12-09 04:10:39.899023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81371 ] 00:21:55.484 [2024-12-09 04:10:40.041521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.484 [2024-12-09 04:10:40.104427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.484 [2024-12-09 04:10:40.181417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:55.484 Running I/O for 90 seconds... 00:21:55.484 8197.00 IOPS, 32.02 MiB/s [2024-12-09T04:11:37.434Z] 8560.50 IOPS, 33.44 MiB/s [2024-12-09T04:11:37.434Z] 8779.00 IOPS, 34.29 MiB/s [2024-12-09T04:11:37.434Z] 8871.25 IOPS, 34.65 MiB/s [2024-12-09T04:11:37.434Z] 8912.20 IOPS, 34.81 MiB/s [2024-12-09T04:11:37.434Z] 8954.83 IOPS, 34.98 MiB/s [2024-12-09T04:11:37.434Z] 8962.43 IOPS, 35.01 MiB/s [2024-12-09T04:11:37.434Z] 8953.12 IOPS, 34.97 MiB/s [2024-12-09T04:11:37.434Z] [2024-12-09 04:10:49.439663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.439761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.439841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.439861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.439882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.439897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.439916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.439930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.439950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.439964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.439983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.439997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.440030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.440063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.440095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.440166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.440218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.440251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.440284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.440317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.440349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.440381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:55.485 [2024-12-09 04:10:49.440413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.440449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.440481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.440514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.440548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.485 [2024-12-09 04:10:49.440589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.485 [2024-12-09 04:10:49.440624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.485 [2024-12-09 04:10:49.440657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.485 [2024-12-09 04:10:49.440689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:60320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.485 [2024-12-09 04:10:49.440722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 
lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.485 [2024-12-09 04:10:49.440754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.485 [2024-12-09 04:10:49.440787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.485 [2024-12-09 04:10:49.440820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.485 [2024-12-09 04:10:49.440853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.485 [2024-12-09 04:10:49.440886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.485 [2024-12-09 04:10:49.440919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.485 [2024-12-09 04:10:49.440952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.440972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.485 [2024-12-09 04:10:49.440986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.441012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.485 [2024-12-09 04:10:49.441026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:55.485 [2024-12-09 04:10:49.441046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:60400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.485 [2024-12-09 04:10:49.441060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.486 [2024-12-09 04:10:49.441093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.486 [2024-12-09 04:10:49.441126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.441159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.441236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.441276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.441310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.441344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.441378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.441412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.441455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:21:55.486 [2024-12-09 04:10:49.441490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.441507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.441542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.486 [2024-12-09 04:10:49.441625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.486 [2024-12-09 04:10:49.441664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.486 [2024-12-09 04:10:49.441699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:60448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.486 [2024-12-09 04:10:49.441732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.486 [2024-12-09 04:10:49.441766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.486 [2024-12-09 04:10:49.441799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.486 [2024-12-09 04:10:49.441832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.486 [2024-12-09 04:10:49.441866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.486 [2024-12-09 04:10:49.441899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.486 [2024-12-09 04:10:49.441932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.486 [2024-12-09 04:10:49.441972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.441993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.486 [2024-12-09 04:10:49.442007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.442026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.486 [2024-12-09 04:10:49.442041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.442061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.486 [2024-12-09 04:10:49.442076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.442095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.486 [2024-12-09 04:10:49.442109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.442129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.486 [2024-12-09 04:10:49.442143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.442221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.442241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.442263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.442279] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.442300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.442315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.442335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.442350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.442370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.442385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.442405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.442420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.442440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.442462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.442484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.442499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:55.486 [2024-12-09 04:10:49.442550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.486 [2024-12-09 04:10:49.442564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.442584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.487 [2024-12-09 04:10:49.442598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.442625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.487 [2024-12-09 04:10:49.442639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.442659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:55.487 [2024-12-09 04:10:49.442673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.442692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.487 [2024-12-09 04:10:49.442706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.442726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.487 [2024-12-09 04:10:49.442741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.442761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.487 [2024-12-09 04:10:49.442775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.442795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.487 [2024-12-09 04:10:49.442810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.442829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.487 [2024-12-09 04:10:49.442855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.442875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.487 [2024-12-09 04:10:49.442889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.442908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.487 [2024-12-09 04:10:49.442922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.442949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.487 [2024-12-09 04:10:49.442963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.442982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.487 [2024-12-09 04:10:49.442996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.487 [2024-12-09 04:10:49.443044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.487 [2024-12-09 04:10:49.443077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.487 [2024-12-09 04:10:49.443111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.487 [2024-12-09 04:10:49.443145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.487 [2024-12-09 04:10:49.443177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.487 [2024-12-09 04:10:49.443222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.487 [2024-12-09 04:10:49.443256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.487 [2024-12-09 04:10:49.443289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.487 [2024-12-09 04:10:49.443331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.487 [2024-12-09 04:10:49.443365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.487 [2024-12-09 04:10:49.443407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.487 [2024-12-09 04:10:49.443440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.487 [2024-12-09 04:10:49.443473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.487 [2024-12-09 04:10:49.443508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.487 [2024-12-09 04:10:49.443541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.487 [2024-12-09 04:10:49.443574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.487 [2024-12-09 04:10:49.443614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.487 [2024-12-09 04:10:49.443648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.487 [2024-12-09 04:10:49.443681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.487 [2024-12-09 04:10:49.443757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
00:21:55.487 [2024-12-09 04:10:49.443778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.487 [2024-12-09 04:10:49.443793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:55.487 [2024-12-09 04:10:49.443812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:49.443826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.443846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:49.443869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.443890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:49.443905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.443925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:49.443945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.443966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:49.443980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.444000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:49.444014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.444033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.488 [2024-12-09 04:10:49.444047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.444066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.488 [2024-12-09 04:10:49.444080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.444100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.488 [2024-12-09 04:10:49.444113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.444132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.488 [2024-12-09 04:10:49.444146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.444176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.488 [2024-12-09 04:10:49.444193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.444220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.488 [2024-12-09 04:10:49.444235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.444255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.488 [2024-12-09 04:10:49.444270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.445728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.488 [2024-12-09 04:10:49.445768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.445795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:49.445812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.445832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:49.445846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.445866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:49.445880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.445900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:49.445914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.445934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:49.445948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.445967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:49.445987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.446008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:49.446022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:49.446055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:49.446074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:55.488 8960.44 IOPS, 35.00 MiB/s [2024-12-09T04:11:37.438Z] 8993.20 IOPS, 35.13 MiB/s [2024-12-09T04:11:37.438Z] 9006.91 IOPS, 35.18 MiB/s [2024-12-09T04:11:37.438Z] 8986.83 IOPS, 35.10 MiB/s [2024-12-09T04:11:37.438Z] 8957.85 IOPS, 34.99 MiB/s [2024-12-09T04:11:37.438Z] 8906.57 IOPS, 34.79 MiB/s [2024-12-09T04:11:37.438Z] [2024-12-09 04:10:56.013254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:56.013350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:56.013388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:127912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:56.013405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:56.013425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:127920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:56.013438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:56.013499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:127928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:56.013515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:56.013533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:127936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:56.013547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:56.013565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:127944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:56.013578] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:56.013597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:127952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:56.013610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:56.013628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:127960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:56.013641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:56.013659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:56.013672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:56.013691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:127976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:56.013704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:56.013723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:127984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:56.013736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:56.013754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:127992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:56.013767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:56.013786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:56.013799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:56.013818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:56.013831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:56.013849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.488 [2024-12-09 04:10:56.013862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:55.488 [2024-12-09 04:10:56.013883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128024 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:21:55.489 [2024-12-09 04:10:56.013912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.013932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:127520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.489 [2024-12-09 04:10:56.013945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.013968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.489 [2024-12-09 04:10:56.013982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.489 [2024-12-09 04:10:56.014015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.489 [2024-12-09 04:10:56.014046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:127552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.489 [2024-12-09 04:10:56.014081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:127560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.489 [2024-12-09 04:10:56.014112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.489 [2024-12-09 04:10:56.014144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:127576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.489 [2024-12-09 04:10:56.014204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.489 [2024-12-09 04:10:56.014318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:67 nsid:1 lba:128040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.489 [2024-12-09 04:10:56.014352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.489 [2024-12-09 04:10:56.014384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.489 [2024-12-09 04:10:56.014425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:128064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.489 [2024-12-09 04:10:56.014459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.489 [2024-12-09 04:10:56.014500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.489 [2024-12-09 04:10:56.014532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.489 [2024-12-09 04:10:56.014566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.489 [2024-12-09 04:10:56.014637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.489 [2024-12-09 04:10:56.014673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.489 [2024-12-09 04:10:56.014705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014724] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.489 [2024-12-09 04:10:56.014737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.489 [2024-12-09 04:10:56.014769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.489 [2024-12-09 04:10:56.014800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.489 [2024-12-09 04:10:56.014832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.489 [2024-12-09 04:10:56.014864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:127584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.489 [2024-12-09 04:10:56.014906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.489 [2024-12-09 04:10:56.014937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.489 [2024-12-09 04:10:56.014968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.014987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.489 [2024-12-09 04:10:56.015000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.015018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.489 [2024-12-09 04:10:56.015031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 
sqhd:0018 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.015049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.489 [2024-12-09 04:10:56.015063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:55.489 [2024-12-09 04:10:56.015082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.489 [2024-12-09 04:10:56.015095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:127640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.015126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.015159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.015208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.015240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:127672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.015272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:127680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.015317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.015349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.015381] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.015413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.015445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:127720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.015477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.015509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.015540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.015572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.015605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.015637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.015670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.490 
[2024-12-09 04:10:56.015714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.490 [2024-12-09 04:10:56.015747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.490 [2024-12-09 04:10:56.015781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.490 [2024-12-09 04:10:56.015813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.490 [2024-12-09 04:10:56.015845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:128200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.490 [2024-12-09 04:10:56.015878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.490 [2024-12-09 04:10:56.015910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.490 [2024-12-09 04:10:56.015942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.490 [2024-12-09 04:10:56.015974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.015993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.490 [2024-12-09 04:10:56.016006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.016025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128240 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.490 [2024-12-09 04:10:56.016038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.016057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.490 [2024-12-09 04:10:56.016070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.016089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.016131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.016153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.016177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.016235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:127792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.016249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.016268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.016282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.016302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:127808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.016315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.016335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:127816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.016350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.016369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.016382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.016401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:127832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.490 [2024-12-09 04:10:56.016414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:55.490 [2024-12-09 04:10:56.016434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.490 [2024-12-09 04:10:56.016447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.016467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.016480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.016499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.016535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.016555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.016569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.016619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.016644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.016666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.016680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.016700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.016714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.016733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.016746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.016765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.016779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.016799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.016813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 
dnr:0 00:21:55.491 [2024-12-09 04:10:56.016832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.016846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.016865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.016879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.016898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.491 [2024-12-09 04:10:56.016912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.016932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.491 [2024-12-09 04:10:56.016946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.016966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.491 [2024-12-09 04:10:56.016979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.016999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:127864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.491 [2024-12-09 04:10:56.017012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.017034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:127872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.491 [2024-12-09 04:10:56.017048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.017084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.491 [2024-12-09 04:10:56.017098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.017118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.491 [2024-12-09 04:10:56.017132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.018481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.491 [2024-12-09 04:10:56.018510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.018535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.018551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.018572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.018586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.018605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.018619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.018646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.018663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.018682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.018696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.018716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.018731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.018751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.018765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.018980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.019004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.019028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.019048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.019084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 
04:10:56.019099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.019119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.019133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.019152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.019180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.019204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.019219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.019239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.019254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.019273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.019288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.019428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.019451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.019474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.019492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.019511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.019525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.019544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.019558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:55.491 [2024-12-09 04:10:56.019577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:128504 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.491 [2024-12-09 04:10:56.019591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.019611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.019624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.019644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.019674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.019695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.019710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.019872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.019896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.019920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.019935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.019954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:127912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.019968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.019988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:127920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.020002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.020021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.020034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.020053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:127936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.020067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.020099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:127944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.020114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.020134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.020148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.020352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:127960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.020376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.020400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.020415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.020435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.020459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.020480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.020494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.020514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.020528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.020547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.020561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.020582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.020605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.020626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.020640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
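Every completion in this stretch of the log carries the same status, which SPDK prints as "(03/02)": Status Code Type 03h (Path Related Status) with Status Code 02h, ASYMMETRIC ACCESS INACCESSIBLE — the status a host sees when the target reports the namespace's ANA group as inaccessible on the path the I/O was submitted to. The trailing fields are the rest of the completion entry: sqhd is the submission queue head pointer, p the phase tag, m the "more information" bit, and dnr the "do not retry" bit (0 here, so the command may be retried, e.g. on another path). As a rough, stand-alone sketch of how to read these status pairs — decode_status and the small lookup table below are illustrative helpers written for this note, not part of SPDK or of this test suite:

# Illustrative decoder for the (SCT/SC) status pairs printed above, e.g.
# "ASYMMETRIC ACCESS INACCESSIBLE (03/02) ... sqhd:007a p:0 m:0 dnr:0".
PATH_RELATED_STATUS = {              # Status Code Type 03h (Path Related Status)
    0x00: "INTERNAL PATH ERROR",
    0x01: "ASYMMETRIC ACCESS PERSISTENT LOSS",
    0x02: "ASYMMETRIC ACCESS INACCESSIBLE",
    0x03: "ASYMMETRIC ACCESS TRANSITION",
}

def decode_status(sct: int, sc: int, dnr: int, m: int) -> str:
    """Render an (SCT/SC) pair plus the dnr/m bits as a readable string."""
    if sct == 0x3:
        name = PATH_RELATED_STATUS.get(sc, f"PATH RELATED STATUS {sc:#04x}")
    else:
        name = f"SCT {sct:#04x} / SC {sc:#04x}"
    flags = [label for label, bit in (("do-not-retry", dnr), ("more-info", m)) if bit]
    return f"{name} ({sct:02x}/{sc:02x})" + (f" [{', '.join(flags)}]" if flags else "")

# The completions logged here: SCT 03h / SC 02h with dnr=0 and m=0.
print(decode_status(0x3, 0x02, dnr=0, m=0))   # -> ASYMMETRIC ACCESS INACCESSIBLE (03/02)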
00:21:55.492 [2024-12-09 04:10:56.021007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.021031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.492 [2024-12-09 04:10:56.021073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.492 [2024-12-09 04:10:56.021106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.492 [2024-12-09 04:10:56.021139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.492 [2024-12-09 04:10:56.021186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:127552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.492 [2024-12-09 04:10:56.021221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:127560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.492 [2024-12-09 04:10:56.021276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.492 [2024-12-09 04:10:56.021312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:127576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.492 [2024-12-09 04:10:56.021347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.021380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.021413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.021445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.021479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.021511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:128072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.021549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.021584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.021634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.021669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.492 [2024-12-09 04:10:56.021702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:55.492 [2024-12-09 04:10:56.021729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.493 [2024-12-09 04:10:56.021744] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.021764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.493 [2024-12-09 04:10:56.021777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.021796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.493 [2024-12-09 04:10:56.021810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.021835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.493 [2024-12-09 04:10:56.021850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.021869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.493 [2024-12-09 04:10:56.021884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.022671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.493 [2024-12-09 04:10:56.022697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.022722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:127584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.022737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.022757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.022770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.022790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.022804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.022822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.022836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.022855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.022869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.022888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.022909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.022940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.022955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.022974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:127640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.022988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.023007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:127648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.023021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.023041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.023054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.023074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.023087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.023107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.023132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.023151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.023178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.023210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.023225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.023245] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:20 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.023260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.023280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.023293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.023313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.023328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.023347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.023361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.023381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:127728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.023403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.023423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.023438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.023457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.023471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.023491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.023512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.023532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.023546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.023566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:127768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.493 [2024-12-09 04:10:56.034578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:55.493 
[2024-12-09 04:10:56.034633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.493 [2024-12-09 04:10:56.034653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.034674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.493 [2024-12-09 04:10:56.034688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.034707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.493 [2024-12-09 04:10:56.034721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.034740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.493 [2024-12-09 04:10:56.034765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.034797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.493 [2024-12-09 04:10:56.034811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.034831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:128200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.493 [2024-12-09 04:10:56.034845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.034865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.493 [2024-12-09 04:10:56.034892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.034921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.493 [2024-12-09 04:10:56.034937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:55.493 [2024-12-09 04:10:56.034957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.493 [2024-12-09 04:10:56.034971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.034990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.494 [2024-12-09 04:10:56.035004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.494 [2024-12-09 04:10:56.035037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.494 [2024-12-09 04:10:56.035070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.494 [2024-12-09 04:10:56.035103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.494 [2024-12-09 04:10:56.035136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.494 [2024-12-09 04:10:56.035169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:127800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.494 [2024-12-09 04:10:56.035218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.494 [2024-12-09 04:10:56.035251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:127816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.494 [2024-12-09 04:10:56.035283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.494 [2024-12-09 04:10:56.035324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.494 [2024-12-09 04:10:56.035359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.494 [2024-12-09 04:10:56.035391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.494 [2024-12-09 04:10:56.035425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.494 [2024-12-09 04:10:56.035473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.494 [2024-12-09 04:10:56.035506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.494 [2024-12-09 04:10:56.035540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.494 [2024-12-09 04:10:56.035573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.494 [2024-12-09 04:10:56.035607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.494 [2024-12-09 04:10:56.035640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.494 [2024-12-09 04:10:56.035673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128328 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:55.494 [2024-12-09 04:10:56.035707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.494 [2024-12-09 04:10:56.035755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.494 [2024-12-09 04:10:56.035795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.494 [2024-12-09 04:10:56.035828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:127848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.494 [2024-12-09 04:10:56.035860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.494 [2024-12-09 04:10:56.035892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.494 [2024-12-09 04:10:56.035924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:127872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.494 [2024-12-09 04:10:56.035957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.035976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.494 [2024-12-09 04:10:56.035990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.036009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.494 [2024-12-09 04:10:56.036022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.036041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:68 nsid:1 lba:127896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.494 [2024-12-09 04:10:56.036054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.036073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.494 [2024-12-09 04:10:56.036086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.036105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.494 [2024-12-09 04:10:56.036119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.036138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.494 [2024-12-09 04:10:56.036152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:55.494 [2024-12-09 04:10:56.036198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036413] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 
sqhd:0066 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:127904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.036978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.036992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.037011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:127920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.037025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.037044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:127928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.037064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.037085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.037099] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.037119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:127944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.037133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.037153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.037166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.037208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.037233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.037254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:127968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.037269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.037293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.037307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.037328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.037343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.037363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.037378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.037398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:128000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.037412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.037432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.037446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.037466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 
04:10:56.037480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.037500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.495 [2024-12-09 04:10:56.037546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.037568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:127520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.495 [2024-12-09 04:10:56.037608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:55.495 [2024-12-09 04:10:56.037629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.495 [2024-12-09 04:10:56.037644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.037666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.496 [2024-12-09 04:10:56.037680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.037702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:127544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.496 [2024-12-09 04:10:56.037717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.037738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.496 [2024-12-09 04:10:56.037753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.037775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.496 [2024-12-09 04:10:56.037789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.037811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.496 [2024-12-09 04:10:56.037825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.037847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.496 [2024-12-09 04:10:56.037862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.037889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128032 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:10:56.037904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.037925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:10:56.037940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.037977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:10:56.037991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.038026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:10:56.038040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.038066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:10:56.038082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.038102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:10:56.038116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.038136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:10:56.038150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.038170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:10:56.038241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.038264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:10:56.038279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.038300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:10:56.038315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.038337] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:10:56.038351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.038372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:10:56.038386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.038407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:10:56.038421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.038442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:10:56.038457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:10:56.039176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:10:56.039257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:55.496 8708.53 IOPS, 34.02 MiB/s [2024-12-09T04:11:37.446Z] 8296.75 IOPS, 32.41 MiB/s [2024-12-09T04:11:37.446Z] 8335.76 IOPS, 32.56 MiB/s [2024-12-09T04:11:37.446Z] 8368.67 IOPS, 32.69 MiB/s [2024-12-09T04:11:37.446Z] 8395.16 IOPS, 32.79 MiB/s [2024-12-09T04:11:37.446Z] 8418.60 IOPS, 32.89 MiB/s [2024-12-09T04:11:37.446Z] 8466.86 IOPS, 33.07 MiB/s [2024-12-09T04:11:37.446Z] [2024-12-09 04:11:03.205667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:11:03.205779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:11:03.205859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:11:03.205879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:11:03.205900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:11:03.205914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:11:03.205933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:11:03.205947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 
m:0 dnr:0 00:21:55.496 [2024-12-09 04:11:03.205965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:11:03.205978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:11:03.205997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:11:03.206010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:11:03.206029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:11:03.206042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:11:03.206061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:11:03.206074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:11:03.206092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:11:03.206105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:11:03.206124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:11:03.206137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:11:03.206155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:11:03.206168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:11:03.206228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:11:03.206246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:11:03.206265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:11:03.206289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:11:03.206311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:11:03.206325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:11:03.206345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:11:03.206359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:11:03.206378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.496 [2024-12-09 04:11:03.206391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:55.496 [2024-12-09 04:11:03.206411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.206424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.206446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.206463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.206482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.206495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.206515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.206528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.206548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.206562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.206596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.206609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.206628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.206642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.206660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.206673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.206693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.206706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.206732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.206746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.206765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.206778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.206797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.206811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.206829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.206843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.206861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.206874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.206912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.206925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.206947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.206961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.206981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.206994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:55.497 [2024-12-09 04:11:03.207029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.207062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.207096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.207129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.207170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.207229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.207266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.497 [2024-12-09 04:11:03.207306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.497 [2024-12-09 04:11:03.207341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.497 [2024-12-09 04:11:03.207375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 
nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.497 [2024-12-09 04:11:03.207410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.497 [2024-12-09 04:11:03.207444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.497 [2024-12-09 04:11:03.207478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.497 [2024-12-09 04:11:03.207512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.497 [2024-12-09 04:11:03.207547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.207596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.207638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.207673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.207706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.207739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.207772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.497 [2024-12-09 04:11:03.207806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:55.497 [2024-12-09 04:11:03.207825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.498 [2024-12-09 04:11:03.207838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.207858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.498 [2024-12-09 04:11:03.207872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.207892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.498 [2024-12-09 04:11:03.207905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.207924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.498 [2024-12-09 04:11:03.207938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.207957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.498 [2024-12-09 04:11:03.207971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.207990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.498 [2024-12-09 04:11:03.208004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.208023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.498 [2024-12-09 04:11:03.208043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.208064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.498 [2024-12-09 04:11:03.208078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 
dnr:0 00:21:55.498 [2024-12-09 04:11:03.208098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.498 [2024-12-09 04:11:03.208112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.208407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.208433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.208464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.208481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.208506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.208521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.208546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.208561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.208600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.208614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.208653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.208667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.208690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.208704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.208727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.208741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.208764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.208778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.208801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.208815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.208847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.208863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.208886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.208900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.208923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.208936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.208960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.208974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.208997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.209010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.209034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.209048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.209071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.209085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.209109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.209123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.209146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.209160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.209183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.209197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.209232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.209249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.209272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.209286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.209317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.209332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.209355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.498 [2024-12-09 04:11:03.209369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:55.498 [2024-12-09 04:11:03.209392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.209405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.209428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.209442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.209465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.209479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.209502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.209516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.209538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:55.499 [2024-12-09 04:11:03.209552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.209575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.209588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.209611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.209624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.209649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.209663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.209701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.499 [2024-12-09 04:11:03.209718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.209742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.499 [2024-12-09 04:11:03.209756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.209779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.499 [2024-12-09 04:11:03.209800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.209824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.499 [2024-12-09 04:11:03.209839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.209862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.499 [2024-12-09 04:11:03.209877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.209899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.499 [2024-12-09 04:11:03.209913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.209936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 
nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.499 [2024-12-09 04:11:03.209949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.209972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.499 [2024-12-09 04:11:03.209986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.210028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.210065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.210101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.210138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.210186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.210264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.210310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.210349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.210386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.210424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.210461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.210498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.210535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.210586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.210622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.499 [2024-12-09 04:11:03.210658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.499 [2024-12-09 04:11:03.210699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.499 [2024-12-09 04:11:03.210736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 
m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.499 [2024-12-09 04:11:03.210772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.499 [2024-12-09 04:11:03.210819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:55.499 [2024-12-09 04:11:03.210842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.499 [2024-12-09 04:11:03.210856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:03.210879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:03.210904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:03.210927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:03.210941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:03.210966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:03.210980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:55.500 8444.91 IOPS, 32.99 MiB/s [2024-12-09T04:11:37.450Z] 8077.74 IOPS, 31.55 MiB/s [2024-12-09T04:11:37.450Z] 7741.17 IOPS, 30.24 MiB/s [2024-12-09T04:11:37.450Z] 7431.52 IOPS, 29.03 MiB/s [2024-12-09T04:11:37.450Z] 7145.69 IOPS, 27.91 MiB/s [2024-12-09T04:11:37.450Z] 6881.04 IOPS, 26.88 MiB/s [2024-12-09T04:11:37.450Z] 6635.29 IOPS, 25.92 MiB/s [2024-12-09T04:11:37.450Z] 6429.48 IOPS, 25.12 MiB/s [2024-12-09T04:11:37.450Z] 6542.43 IOPS, 25.56 MiB/s [2024-12-09T04:11:37.450Z] 6658.81 IOPS, 26.01 MiB/s [2024-12-09T04:11:37.450Z] 6767.22 IOPS, 26.43 MiB/s [2024-12-09T04:11:37.450Z] 6871.97 IOPS, 26.84 MiB/s [2024-12-09T04:11:37.450Z] 6967.50 IOPS, 27.22 MiB/s [2024-12-09T04:11:37.450Z] 7047.63 IOPS, 27.53 MiB/s [2024-12-09T04:11:37.450Z] [2024-12-09 04:11:16.641257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:16.641323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.641402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:16.641422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.641444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:16.641462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.641481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:16.641495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.641515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:16.641529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.641548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:16.641589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.641611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:16.641625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.641644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:16.641658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.641677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:34680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.500 [2024-12-09 04:11:16.641691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.641710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.500 [2024-12-09 04:11:16.641724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.641743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:34696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.500 [2024-12-09 04:11:16.641759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.641778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.500 [2024-12-09 04:11:16.641791] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.641811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:34712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.500 [2024-12-09 04:11:16.641824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.641843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.500 [2024-12-09 04:11:16.641856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.641875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.500 [2024-12-09 04:11:16.641888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.641907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.500 [2024-12-09 04:11:16.641920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.641939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.500 [2024-12-09 04:11:16.641952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.641974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:34752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.500 [2024-12-09 04:11:16.641988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.642016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:34760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.500 [2024-12-09 04:11:16.642031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.642050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.500 [2024-12-09 04:11:16.642064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.642083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.500 [2024-12-09 04:11:16.642097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.642116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:55.500 [2024-12-09 04:11:16.642130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.642150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.500 [2024-12-09 04:11:16.642163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.642196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.500 [2024-12-09 04:11:16.642213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.642311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:16.642334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.642349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:16.642362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.642377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:16.642390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.642404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:16.642417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.642431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:16.642444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.642459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:16.642472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.642497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:16.642511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.642526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:55.500 [2024-12-09 04:11:16.642539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.500 [2024-12-09 04:11:16.642553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.500 [2024-12-09 04:11:16.642592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.642606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.642619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.642633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.642646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.642659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.642671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.642685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.642698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.642717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.642729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.642743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.642755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.642770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.642783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.642808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.642820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.642834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.642847] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.642860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.642873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.642893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.642907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.642921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.642934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.642948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.642960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.642974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.642986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.643012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.501 [2024-12-09 04:11:16.643038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.501 [2024-12-09 04:11:16.643066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.501 [2024-12-09 04:11:16.643092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.501 [2024-12-09 04:11:16.643118] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.501 [2024-12-09 04:11:16.643144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.501 [2024-12-09 04:11:16.643170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.501 [2024-12-09 04:11:16.643209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.501 [2024-12-09 04:11:16.643246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.501 [2024-12-09 04:11:16.643274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.501 [2024-12-09 04:11:16.643300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.501 [2024-12-09 04:11:16.643327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.501 [2024-12-09 04:11:16.643353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.501 [2024-12-09 04:11:16.643379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.501 [2024-12-09 04:11:16.643406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:34920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.501 [2024-12-09 04:11:16.643432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:34928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.501 [2024-12-09 04:11:16.643458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.643484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.643512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.643539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.643565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.643599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.643626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.643669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.501 [2024-12-09 04:11:16.643684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.501 [2024-12-09 04:11:16.643697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:55.501 [2024-12-09 04:11:16.643711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.643723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.643737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.643750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.643765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.643777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.643791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:34960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.643804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.643818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.643831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.643845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.643858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.643872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.643885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.643899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.643912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.643926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.643945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.643961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.643975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.643990] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.502 [2024-12-09 04:11:16.644167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.502 [2024-12-09 04:11:16.644222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.502 [2024-12-09 04:11:16.644250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.502 [2024-12-09 04:11:16.644278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644293] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.502 [2024-12-09 04:11:16.644306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.502 [2024-12-09 04:11:16.644341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.502 [2024-12-09 04:11:16.644370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:55.502 [2024-12-09 04:11:16.644398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:35096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:35104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:35112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:35120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:35136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:35160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:35168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.502 [2024-12-09 04:11:16.644878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.502 [2024-12-09 04:11:16.644898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.503 [2024-12-09 04:11:16.644913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:55.503 [2024-12-09 04:11:16.644926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.503 [2024-12-09 04:11:16.644947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.503 [2024-12-09 04:11:16.644961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.503 [2024-12-09 04:11:16.644975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.503 [2024-12-09 04:11:16.644987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.503 [2024-12-09 04:11:16.645001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.503 [2024-12-09 04:11:16.645014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.503 [2024-12-09 04:11:16.645029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.503 [2024-12-09 04:11:16.645041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.503 [2024-12-09 04:11:16.645061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:35232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.503 [2024-12-09 04:11:16.645075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.503 [2024-12-09 04:11:16.645089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:35240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.503 [2024-12-09 04:11:16.645115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.503 [2024-12-09 04:11:16.645131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785290 is same with the state(6) to be set 00:21:55.503 [2024-12-09 04:11:16.645148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.503 [2024-12-09 04:11:16.645158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.503 [2024-12-09 04:11:16.645178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35248 len:8 PRP1 0x0 PRP2 0x0 00:21:55.503 [2024-12-09 04:11:16.645193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.503 [2024-12-09 04:11:16.645207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.503 [2024-12-09 04:11:16.645216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.503 [2024-12-09 04:11:16.645241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35640 len:8 PRP1 0x0 PRP2 0x0 00:21:55.503 [2024-12-09 04:11:16.645254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.503 [2024-12-09 04:11:16.645267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.503 [2024-12-09 04:11:16.645276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.503 [2024-12-09 04:11:16.645286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35648 len:8 PRP1 0x0 PRP2 0x0 00:21:55.503 [2024-12-09 04:11:16.645298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.503 [2024-12-09 04:11:16.645311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.503 [2024-12-09 04:11:16.645321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.503 [2024-12-09 04:11:16.645331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35656 len:8 PRP1 0x0 PRP2 0x0 00:21:55.503 [2024-12-09 04:11:16.645344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.503 [2024-12-09 04:11:16.645356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.503 [2024-12-09 04:11:16.645366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.503 [2024-12-09 04:11:16.645376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35664 len:8 PRP1 0x0 PRP2 0x0 00:21:55.503 [2024-12-09 04:11:16.645394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.503 [2024-12-09 04:11:16.645407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.503 [2024-12-09 04:11:16.645417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.503 [2024-12-09 04:11:16.645426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35672 len:8 PRP1 0x0 PRP2 0x0 00:21:55.503 [2024-12-09 04:11:16.645439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.503 [2024-12-09 04:11:16.645452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.503 [2024-12-09 04:11:16.645461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.503 [2024-12-09 04:11:16.645471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35680 len:8 PRP1 0x0 PRP2 0x0 00:21:55.503 [2024-12-09 04:11:16.645484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.503 [2024-12-09 04:11:16.645509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.503 [2024-12-09 04:11:16.645520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.503 [2024-12-09 04:11:16.645530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35688 len:8 PRP1 0x0 PRP2 0x0 00:21:55.503 [2024-12-09 04:11:16.645542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:55.503 [2024-12-09 04:11:16.645555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:55.503 [2024-12-09 04:11:16.645564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.503 [2024-12-09 04:11:16.645574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35696 len:8 PRP1 0x0 PRP2 0x0 00:21:55.503 [2024-12-09 04:11:16.645586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.503 [2024-12-09 04:11:16.646843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:55.503 [2024-12-09 04:11:16.646936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:55.503 [2024-12-09 04:11:16.646958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.503 [2024-12-09 04:11:16.646989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f61e0 (9): Bad file descriptor 00:21:55.503 [2024-12-09 04:11:16.647392] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.503 [2024-12-09 04:11:16.647424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f61e0 with addr=10.0.0.3, port=4421 00:21:55.503 [2024-12-09 04:11:16.647440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f61e0 is same with the state(6) to be set 00:21:55.503 [2024-12-09 04:11:16.647472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f61e0 (9): Bad file descriptor 00:21:55.503 [2024-12-09 04:11:16.647503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:21:55.503 [2024-12-09 04:11:16.647519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:21:55.503 [2024-12-09 04:11:16.647533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:21:55.503 [2024-12-09 04:11:16.647547] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:21:55.503 [2024-12-09 04:11:16.647563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:55.503 7120.89 IOPS, 27.82 MiB/s [2024-12-09T04:11:37.453Z] 7183.57 IOPS, 28.06 MiB/s [2024-12-09T04:11:37.453Z] 7253.05 IOPS, 28.33 MiB/s [2024-12-09T04:11:37.453Z] 7319.28 IOPS, 28.59 MiB/s [2024-12-09T04:11:37.453Z] 7380.80 IOPS, 28.83 MiB/s [2024-12-09T04:11:37.453Z] 7438.83 IOPS, 29.06 MiB/s [2024-12-09T04:11:37.453Z] 7490.48 IOPS, 29.26 MiB/s [2024-12-09T04:11:37.453Z] 7538.60 IOPS, 29.45 MiB/s [2024-12-09T04:11:37.453Z] 7588.91 IOPS, 29.64 MiB/s [2024-12-09T04:11:37.453Z] 7641.29 IOPS, 29.85 MiB/s [2024-12-09T04:11:37.453Z] [2024-12-09 04:11:26.705447] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
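The long run of ABORTED - SQ DELETION notices above is the expected fallout of a path switch: when the active TCP connection for nqn.2016-06.io.spdk:cnode1 is torn down, every command still queued on that I/O qpair is completed with an abort status, and bdev_nvme keeps retrying the controller reset (note the later connection attempts against 10.0.0.3 port 4421) until one succeeds, as the final "Resetting controller successful" notice shows. A failover like this can be forced from the target side with the listener RPCs that appear elsewhere in this log; the following is only a rough sketch, not the literal multipath.sh sequence, and assumes the subsystem already has listeners on both ports 4420 and 4421:

    # drop the path the host is currently using; I/O queued on it is aborted and requeued
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # the host reconnects through the surviving listener on port 4421; the first path can then be restored
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420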
00:21:55.503 7688.22 IOPS, 30.03 MiB/s [2024-12-09T04:11:37.453Z] 7736.38 IOPS, 30.22 MiB/s [2024-12-09T04:11:37.453Z] 7778.54 IOPS, 30.38 MiB/s [2024-12-09T04:11:37.453Z] 7820.45 IOPS, 30.55 MiB/s [2024-12-09T04:11:37.453Z] 7849.16 IOPS, 30.66 MiB/s [2024-12-09T04:11:37.453Z] 7872.67 IOPS, 30.75 MiB/s [2024-12-09T04:11:37.453Z] 7893.58 IOPS, 30.83 MiB/s [2024-12-09T04:11:37.453Z] 7915.17 IOPS, 30.92 MiB/s [2024-12-09T04:11:37.453Z] 7932.43 IOPS, 30.99 MiB/s [2024-12-09T04:11:37.453Z] 7958.58 IOPS, 31.09 MiB/s [2024-12-09T04:11:37.453Z] Received shutdown signal, test time was about 55.628896 seconds 00:21:55.503 00:21:55.503 Latency(us) 00:21:55.503 [2024-12-09T04:11:37.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.504 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:55.504 Verification LBA range: start 0x0 length 0x4000 00:21:55.504 Nvme0n1 : 55.63 7973.49 31.15 0.00 0.00 16027.12 714.94 7046430.72 00:21:55.504 [2024-12-09T04:11:37.454Z] =================================================================================================================== 00:21:55.504 [2024-12-09T04:11:37.454Z] Total : 7973.49 31.15 0.00 0.00 16027.12 714.94 7046430.72 00:21:55.504 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:55.762 rmmod nvme_tcp 00:21:55.762 rmmod nvme_fabrics 00:21:55.762 rmmod nvme_keyring 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 81323 ']' 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 81323 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81323 ']' 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81323 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81323 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:55.762 killing process with pid 81323 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81323' 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81323 00:21:55.762 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81323 00:21:56.021 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:56.021 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:56.021 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:56.021 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:21:56.021 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:21:56.021 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:21:56.021 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:56.021 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:56.021 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:56.021 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:56.021 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:56.021 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:56.021 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:56.021 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:56.021 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:56.021 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:56.021 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:56.021 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:56.280 04:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:56.280 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:56.280 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:56.280 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:56.280 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:56.280 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.280 04:11:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.280 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.280 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:21:56.280 00:21:56.280 real 1m0.934s 00:21:56.280 user 2m48.086s 00:21:56.280 sys 0m19.000s 00:21:56.280 ************************************ 00:21:56.280 END TEST nvmf_host_multipath 00:21:56.280 ************************************ 00:21:56.280 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:56.280 04:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:56.280 04:11:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:56.280 04:11:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:56.280 04:11:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:56.280 04:11:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.280 ************************************ 00:21:56.280 START TEST nvmf_timeout 00:21:56.280 ************************************ 00:21:56.280 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:56.280 * Looking for test storage... 00:21:56.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:56.280 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:56.281 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:56.540 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:56.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.541 --rc genhtml_branch_coverage=1 00:21:56.541 --rc genhtml_function_coverage=1 00:21:56.541 --rc genhtml_legend=1 00:21:56.541 --rc geninfo_all_blocks=1 00:21:56.541 --rc geninfo_unexecuted_blocks=1 00:21:56.541 00:21:56.541 ' 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:56.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.541 --rc genhtml_branch_coverage=1 00:21:56.541 --rc genhtml_function_coverage=1 00:21:56.541 --rc genhtml_legend=1 00:21:56.541 --rc geninfo_all_blocks=1 00:21:56.541 --rc geninfo_unexecuted_blocks=1 00:21:56.541 00:21:56.541 ' 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:56.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.541 --rc genhtml_branch_coverage=1 00:21:56.541 --rc genhtml_function_coverage=1 00:21:56.541 --rc genhtml_legend=1 00:21:56.541 --rc geninfo_all_blocks=1 00:21:56.541 --rc geninfo_unexecuted_blocks=1 00:21:56.541 00:21:56.541 ' 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:56.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.541 --rc genhtml_branch_coverage=1 00:21:56.541 --rc genhtml_function_coverage=1 00:21:56.541 --rc genhtml_legend=1 00:21:56.541 --rc geninfo_all_blocks=1 00:21:56.541 --rc geninfo_unexecuted_blocks=1 00:21:56.541 00:21:56.541 ' 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.541 
04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:56.541 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:56.541 04:11:38 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:56.541 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:56.542 Cannot find device "nvmf_init_br" 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:56.542 Cannot find device "nvmf_init_br2" 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:21:56.542 Cannot find device "nvmf_tgt_br" 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:56.542 Cannot find device "nvmf_tgt_br2" 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:56.542 Cannot find device "nvmf_init_br" 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:56.542 Cannot find device "nvmf_init_br2" 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:56.542 Cannot find device "nvmf_tgt_br" 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:56.542 Cannot find device "nvmf_tgt_br2" 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:56.542 Cannot find device "nvmf_br" 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:56.542 Cannot find device "nvmf_init_if" 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:56.542 Cannot find device "nvmf_init_if2" 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:21:56.542 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:56.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:56.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
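At this point nvmf_veth_init has finished building the test topology: two initiator-side interfaces on the host (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), two target-side interfaces inside the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4), all joined through the nvmf_br bridge, with iptables rules admitting TCP port 4420. Condensed to the first of the two paths (link-up of some peer ends omitted), the commands logged above amount to:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The ping checks that follow (10.0.0.3/10.0.0.4 from the host, 10.0.0.1/10.0.0.2 from inside the namespace) simply confirm the bridge is passing traffic before the target is started.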
00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:56.802 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:56.802 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:21:56.802 00:21:56.802 --- 10.0.0.3 ping statistics --- 00:21:56.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.802 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:56.802 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:56.802 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:21:56.802 00:21:56.802 --- 10.0.0.4 ping statistics --- 00:21:56.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.802 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:56.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:56.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:21:56.802 00:21:56.802 --- 10.0.0.1 ping statistics --- 00:21:56.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.802 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:56.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:56.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:21:56.802 00:21:56.802 --- 10.0.0.2 ping statistics --- 00:21:56.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.802 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82527 00:21:56.802 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82527 00:21:56.803 04:11:38 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82527 ']' 00:21:56.803 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.803 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.803 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.803 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.803 04:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:57.062 [2024-12-09 04:11:38.784534] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:21:57.062 [2024-12-09 04:11:38.784653] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.062 [2024-12-09 04:11:38.928087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:57.062 [2024-12-09 04:11:38.994066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.062 [2024-12-09 04:11:38.994124] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.063 [2024-12-09 04:11:38.994149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.063 [2024-12-09 04:11:38.994157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.063 [2024-12-09 04:11:38.994163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
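The target itself is launched inside the namespace and the harness then blocks until its RPC socket answers; the exact waitforlisten helper lives in autotest_common.sh. One equivalent way to express the same wait, assuming rpc_get_methods as the readiness probe (an assumption, not the helper's actual implementation), would be:

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # poll the default RPC socket (/var/tmp/spdk.sock) until the app can serve RPCs
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

With -m 0x3 the app gets two cores, which matches the two reactors reported below.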
00:21:57.063 [2024-12-09 04:11:38.995597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.063 [2024-12-09 04:11:38.995606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.321 [2024-12-09 04:11:39.067525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:57.889 04:11:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:57.889 04:11:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:57.889 04:11:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:57.889 04:11:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:57.889 04:11:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:57.889 04:11:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.889 04:11:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:57.889 04:11:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:58.148 [2024-12-09 04:11:40.076864] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.148 04:11:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:58.406 Malloc0 00:21:58.664 04:11:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:58.922 04:11:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:59.182 04:11:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:59.182 [2024-12-09 04:11:41.111391] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:59.182 04:11:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:59.182 04:11:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82576 00:21:59.182 04:11:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82576 /var/tmp/bdevperf.sock 00:21:59.440 04:11:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82576 ']' 00:21:59.440 04:11:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.440 04:11:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:59.440 04:11:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
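Stripped of the xtrace noise, the target-side configuration driven by timeout.sh@25-29 and the bdevperf launch at @31 reduce to the following sequence (paths shortened; flags exactly as logged):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # separate initiator process, driven over its own RPC socket
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &

That is: a 64 MiB malloc bdev with 512-byte blocks exported as a namespace of cnode1 behind a TCP listener on 10.0.0.3:4420, plus a bdevperf initiator (queue depth 128, 4096-byte verify workload, 10-second run) started in -z mode so the controller attach and the test start can be driven over /var/tmp/bdevperf.sock.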
00:21:59.440 04:11:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.440 04:11:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:59.440 [2024-12-09 04:11:41.174399] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:21:59.441 [2024-12-09 04:11:41.174475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82576 ] 00:21:59.441 [2024-12-09 04:11:41.310146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.441 [2024-12-09 04:11:41.362599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.699 [2024-12-09 04:11:41.437360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:59.699 04:11:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.699 04:11:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:59.699 04:11:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:59.957 04:11:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:00.216 NVMe0n1 00:22:00.216 04:11:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82587 00:22:00.216 04:11:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:00.216 04:11:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:00.474 Running I/O for 10 seconds... 
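At this point the target and the bdevperf initiator are fully wired up and the first 10-second verify run is in flight. For reference, the setup the trace just walked through reduces to a handful of RPC calls; the sketch below simply collects them (every command and value is copied from the trace above, only the $rpc_py shorthand is added):

    # Target side, default RPC socket /var/tmp/spdk.sock (timeout.sh@25-29)
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Initiator side, bdevperf RPC socket: retry setting from timeout.sh@45, then attach
    # with a 5 s controller-loss timeout and a 2 s reconnect delay (timeout.sh@46)
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2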
00:22:01.411 04:11:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:01.411 7956.00 IOPS, 31.08 MiB/s [2024-12-09T04:11:43.361Z] [2024-12-09 04:11:43.311347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.411 [2024-12-09 04:11:43.311410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.411 [2024-12-09 04:11:43.311424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.411 [2024-12-09 04:11:43.311433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.411 [2024-12-09 04:11:43.311443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.411 [2024-12-09 04:11:43.311451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.411 [2024-12-09 04:11:43.311461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.411 [2024-12-09 04:11:43.311469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.411 [2024-12-09 04:11:43.311477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882e50 is same with the state(6) to be set 00:22:01.411 [2024-12-09 04:11:43.311720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.411 [2024-12-09 04:11:43.311738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.411 [2024-12-09 04:11:43.311757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.411 [2024-12-09 04:11:43.311767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.411 [2024-12-09 04:11:43.311778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.311787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.311797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.311806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.311816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.311824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.311834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.311858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.311868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.311877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.311887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.311896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.311906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.311925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.311935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.311944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.311954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.311962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.311973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.311981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.311991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 
04:11:43.312051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:38 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72320 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 
04:11:43.312923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.412 [2024-12-09 04:11:43.312984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.412 [2024-12-09 04:11:43.312994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313773] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.313981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.313995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.314006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:111 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.314015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.314026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.314035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.314046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.314056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.314067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.314075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.314086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.413 [2024-12-09 04:11:43.314095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.314106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.413 [2024-12-09 04:11:43.314115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.314125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.413 [2024-12-09 04:11:43.314134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.314144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.413 [2024-12-09 04:11:43.314153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.314164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.413 [2024-12-09 04:11:43.314173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.413 [2024-12-09 04:11:43.314184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.413 [2024-12-09 04:11:43.314193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.414 [2024-12-09 04:11:43.314215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71888 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.414 [2024-12-09 04:11:43.314247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.414 [2024-12-09 04:11:43.314260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.414 [2024-12-09 04:11:43.314269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.414 [2024-12-09 04:11:43.314279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.414 [2024-12-09 04:11:43.314289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.414 [2024-12-09 04:11:43.314327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.414 [2024-12-09 04:11:43.314337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.414 [2024-12-09 04:11:43.314348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.414 [2024-12-09 04:11:43.314358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.414 [2024-12-09 04:11:43.314376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.414 [2024-12-09 04:11:43.314386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.414 [2024-12-09 04:11:43.314397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.414 [2024-12-09 04:11:43.314406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.414 [2024-12-09 04:11:43.314417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.414 [2024-12-09 04:11:43.314438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.414 [2024-12-09 04:11:43.314449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.414 [2024-12-09 04:11:43.314459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.414 [2024-12-09 04:11:43.314470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.414 [2024-12-09 04:11:43.314479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.414 [2024-12-09 04:11:43.314490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:01.414 [2024-12-09 04:11:43.314500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.414 [2024-12-09 04:11:43.314510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e2690 is same with the state(6) to be set 00:22:01.414 [2024-12-09 04:11:43.314522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.414 [2024-12-09 04:11:43.314530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.414 [2024-12-09 04:11:43.314538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72856 len:8 PRP1 0x0 PRP2 0x0 00:22:01.414 [2024-12-09 04:11:43.314548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.414 [2024-12-09 04:11:43.314899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:01.414 [2024-12-09 04:11:43.314924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1882e50 (9): Bad file descriptor 00:22:01.414 [2024-12-09 04:11:43.315043] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.414 [2024-12-09 04:11:43.315065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1882e50 with addr=10.0.0.3, port=4420 00:22:01.414 [2024-12-09 04:11:43.315077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882e50 is same with the state(6) to be set 00:22:01.414 [2024-12-09 04:11:43.315125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1882e50 (9): Bad file descriptor 00:22:01.414 [2024-12-09 04:11:43.315141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:01.414 [2024-12-09 04:11:43.315149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:01.414 [2024-12-09 04:11:43.315160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:01.414 [2024-12-09 04:11:43.315171] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
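The burst of ABORTED - SQ DELETION completions above is the immediate effect of timeout.sh@55 removing the 10.0.0.3:4420 listener while the verify workload is still queuing I/O: every outstanding command is aborted, the controller is disconnected, and each reconnect attempt then fails with errno 111 (connection refused). With --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5 the initiator keeps retrying for a few seconds and then gives up, which is why the later get_controller/get_bdev checks return empty strings. A small, purely illustrative way to watch that window from the shell (same RPC socket and jq filters the test itself uses):

    # Poll the initiator while the listener is down; the controller and bdev stay listed
    # while reconnects are still allowed and vanish once the controller-loss timeout expires.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in $(seq 1 10); do
        ctrl=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
        bdev=$($rpc_py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name')
        echo "t=+${i}s controller='${ctrl}' bdev='${bdev}'"
        sleep 1
    done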
00:22:01.414 [2024-12-09 04:11:43.315198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:01.414 04:11:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:03.285 4490.00 IOPS, 17.54 MiB/s [2024-12-09T04:11:45.492Z] 2993.33 IOPS, 11.69 MiB/s [2024-12-09T04:11:45.492Z] [2024-12-09 04:11:45.315366] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.542 [2024-12-09 04:11:45.315431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1882e50 with addr=10.0.0.3, port=4420 00:22:03.542 [2024-12-09 04:11:45.315446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882e50 is same with the state(6) to be set 00:22:03.542 [2024-12-09 04:11:45.315470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1882e50 (9): Bad file descriptor 00:22:03.542 [2024-12-09 04:11:45.315490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:03.542 [2024-12-09 04:11:45.315500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:03.542 [2024-12-09 04:11:45.315511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:03.542 [2024-12-09 04:11:45.315522] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:03.542 [2024-12-09 04:11:45.315533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:03.542 04:11:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:03.542 04:11:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:03.542 04:11:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:03.800 04:11:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:03.800 04:11:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:03.800 04:11:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:03.800 04:11:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:04.058 04:11:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:04.058 04:11:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:05.251 2245.00 IOPS, 8.77 MiB/s [2024-12-09T04:11:47.459Z] 1796.00 IOPS, 7.02 MiB/s [2024-12-09T04:11:47.459Z] [2024-12-09 04:11:47.315829] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.509 [2024-12-09 04:11:47.315877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1882e50 with addr=10.0.0.3, port=4420 00:22:05.509 [2024-12-09 04:11:47.315893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882e50 is same with the state(6) to be set 00:22:05.509 [2024-12-09 04:11:47.315918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1882e50 (9): Bad file descriptor 00:22:05.509 [2024-12-09 04:11:47.315938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:05.509 [2024-12-09 04:11:47.315948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:05.509 [2024-12-09 04:11:47.315960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:05.509 [2024-12-09 04:11:47.315971] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:05.509 [2024-12-09 04:11:47.315983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:07.381 1496.67 IOPS, 5.85 MiB/s [2024-12-09T04:11:49.331Z] 1282.86 IOPS, 5.01 MiB/s [2024-12-09T04:11:49.331Z] [2024-12-09 04:11:49.316075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:07.381 [2024-12-09 04:11:49.316135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:07.381 [2024-12-09 04:11:49.316146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:07.381 [2024-12-09 04:11:49.316156] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:22:07.381 [2024-12-09 04:11:49.316166] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:08.576 1122.50 IOPS, 4.38 MiB/s 00:22:08.576 Latency(us) 00:22:08.576 [2024-12-09T04:11:50.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.576 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:08.576 Verification LBA range: start 0x0 length 0x4000 00:22:08.576 NVMe0n1 : 8.15 1102.45 4.31 15.71 0.00 114324.39 3842.79 7015926.69 00:22:08.576 [2024-12-09T04:11:50.526Z] =================================================================================================================== 00:22:08.576 [2024-12-09T04:11:50.526Z] Total : 1102.45 4.31 15.71 0.00 114324.39 3842.79 7015926.69 00:22:08.576 { 00:22:08.576 "results": [ 00:22:08.576 { 00:22:08.576 "job": "NVMe0n1", 00:22:08.576 "core_mask": "0x4", 00:22:08.576 "workload": "verify", 00:22:08.576 "status": "finished", 00:22:08.576 "verify_range": { 00:22:08.576 "start": 0, 00:22:08.576 "length": 16384 00:22:08.576 }, 00:22:08.576 "queue_depth": 128, 00:22:08.576 "io_size": 4096, 00:22:08.576 "runtime": 8.145511, 00:22:08.576 "iops": 1102.4477162942878, 00:22:08.576 "mibps": 4.306436391774562, 00:22:08.576 "io_failed": 128, 00:22:08.576 "io_timeout": 0, 00:22:08.576 "avg_latency_us": 114324.39353215955, 00:22:08.576 "min_latency_us": 3842.7927272727275, 00:22:08.576 "max_latency_us": 7015926.69090909 00:22:08.576 } 00:22:08.576 ], 00:22:08.576 "core_count": 1 00:22:08.576 } 00:22:09.144 04:11:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:09.144 04:11:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:09.144 04:11:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:09.419 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:09.419 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:09.419 04:11:51 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:09.419 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:09.692 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:09.692 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82587 00:22:09.692 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82576 00:22:09.692 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82576 ']' 00:22:09.692 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82576 00:22:09.692 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:09.692 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.692 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82576 00:22:09.692 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:09.692 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:09.692 killing process with pid 82576 00:22:09.692 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82576' 00:22:09.692 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82576 00:22:09.692 Received shutdown signal, test time was about 9.291535 seconds 00:22:09.692 00:22:09.692 Latency(us) 00:22:09.692 [2024-12-09T04:11:51.642Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.692 [2024-12-09T04:11:51.642Z] =================================================================================================================== 00:22:09.692 [2024-12-09T04:11:51.642Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:09.692 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82576 00:22:09.951 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:10.209 [2024-12-09 04:11:51.926521] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:10.209 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82714 00:22:10.209 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:10.209 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82714 /var/tmp/bdevperf.sock 00:22:10.209 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82714 ']' 00:22:10.209 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:10.209 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:10.209 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:10.209 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.209 04:11:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:10.209 [2024-12-09 04:11:52.002742] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:22:10.209 [2024-12-09 04:11:52.002845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82714 ] 00:22:10.209 [2024-12-09 04:11:52.151249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.467 [2024-12-09 04:11:52.206698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.467 [2024-12-09 04:11:52.284081] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:10.467 04:11:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.467 04:11:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:10.467 04:11:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:10.727 04:11:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:10.986 NVMe0n1 00:22:10.986 04:11:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82726 00:22:10.986 04:11:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:10.986 04:11:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:11.244 Running I/O for 10 seconds... 
00:22:12.178 04:11:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:12.437 7956.00 IOPS, 31.08 MiB/s [2024-12-09T04:11:54.387Z] [2024-12-09 04:11:54.164561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.164623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.164647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.164658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.164669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.164678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.164689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.164697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.164708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.164717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.164728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.164737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.164748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.164756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.164767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.164775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.164785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.164794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.164803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71712 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.164812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.164822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.164831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.164841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.164850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.164860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.164869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.164902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.164923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.164933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.164942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.164952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.164961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.164971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.164983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.164994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.165003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.165013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.165022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.165032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 
[2024-12-09 04:11:54.165040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.165051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.165059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.165070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.437 [2024-12-09 04:11:54.165078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.437 [2024-12-09 04:11:54.165088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.438 [2024-12-09 04:11:54.165096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.438 [2024-12-09 04:11:54.165511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.438 [2024-12-09 04:11:54.165532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.438 [2024-12-09 04:11:54.165741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.165990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.165998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:12.438 [2024-12-09 04:11:54.166008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.166016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.166026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.166035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.166044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.166053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.166063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.166072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.166082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.166093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.166104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.166113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.166123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.166132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.166143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.166151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.166161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.166170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.166196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.166206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.166217] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.166226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.166238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.166247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.166259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.166268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.166280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.166289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.166310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.166332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.166344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.166356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.166367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.438 [2024-12-09 04:11:54.166377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.438 [2024-12-09 04:11:54.166389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:85 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71416 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.166984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.166993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:12.439 [2024-12-09 04:11:54.167163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167407] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.439 [2024-12-09 04:11:54.167579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7bb690 is same with the state(6) to be set 00:22:12.439 [2024-12-09 04:11:54.167600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.439 [2024-12-09 04:11:54.167608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.439 [2024-12-09 04:11:54.167615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71632 len:8 PRP1 0x0 PRP2 0x0 00:22:12.439 [2024-12-09 04:11:54.167624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.439 [2024-12-09 04:11:54.167960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:12.439 [2024-12-09 04:11:54.168043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75be50 (9): Bad file descriptor 00:22:12.439 [2024-12-09 04:11:54.168161] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.439 [2024-12-09 04:11:54.168199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75be50 with 
addr=10.0.0.3, port=4420 00:22:12.439 [2024-12-09 04:11:54.168211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75be50 is same with the state(6) to be set 00:22:12.439 [2024-12-09 04:11:54.168231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75be50 (9): Bad file descriptor 00:22:12.439 [2024-12-09 04:11:54.168249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:12.439 [2024-12-09 04:11:54.168274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:12.439 [2024-12-09 04:11:54.168287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:12.439 [2024-12-09 04:11:54.168299] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:22:12.439 [2024-12-09 04:11:54.168311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:12.439 04:11:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:22:13.373 4426.50 IOPS, 17.29 MiB/s [2024-12-09T04:11:55.323Z] [2024-12-09 04:11:55.168394] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.373 [2024-12-09 04:11:55.168450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75be50 with addr=10.0.0.3, port=4420 00:22:13.373 [2024-12-09 04:11:55.168463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75be50 is same with the state(6) to be set 00:22:13.373 [2024-12-09 04:11:55.168482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75be50 (9): Bad file descriptor 00:22:13.373 [2024-12-09 04:11:55.168498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:13.373 [2024-12-09 04:11:55.168507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:13.373 [2024-12-09 04:11:55.168516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:13.373 [2024-12-09 04:11:55.168525] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:22:13.373 [2024-12-09 04:11:55.168534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:13.373 04:11:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:13.630 [2024-12-09 04:11:55.443912] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:13.630 04:11:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82726 00:22:14.467 2951.00 IOPS, 11.53 MiB/s [2024-12-09T04:11:56.417Z] [2024-12-09 04:11:56.184302] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:22:16.339 2213.25 IOPS, 8.65 MiB/s [2024-12-09T04:11:59.225Z] 3613.40 IOPS, 14.11 MiB/s [2024-12-09T04:12:00.162Z] 4811.50 IOPS, 18.79 MiB/s [2024-12-09T04:12:01.125Z] 5645.29 IOPS, 22.05 MiB/s [2024-12-09T04:12:02.058Z] 6248.62 IOPS, 24.41 MiB/s [2024-12-09T04:12:03.430Z] 6727.22 IOPS, 26.28 MiB/s 00:22:21.480 Latency(us) 00:22:21.480 [2024-12-09T04:12:03.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.480 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:21.480 Verification LBA range: start 0x0 length 0x4000 00:22:21.480 NVMe0n1 : 10.01 7084.64 27.67 0.00 0.00 18026.40 1020.28 3019898.88 00:22:21.480 [2024-12-09T04:12:03.430Z] =================================================================================================================== 00:22:21.480 [2024-12-09T04:12:03.430Z] Total : 7084.64 27.67 0.00 0.00 18026.40 1020.28 3019898.88 00:22:21.480 { 00:22:21.480 "results": [ 00:22:21.480 { 00:22:21.480 "job": "NVMe0n1", 00:22:21.480 "core_mask": "0x4", 00:22:21.480 "workload": "verify", 00:22:21.480 "status": "finished", 00:22:21.480 "verify_range": { 00:22:21.480 "start": 0, 00:22:21.480 "length": 16384 00:22:21.480 }, 00:22:21.480 "queue_depth": 128, 00:22:21.480 "io_size": 4096, 00:22:21.480 "runtime": 10.006572, 00:22:21.480 "iops": 7084.6439719816135, 00:22:21.480 "mibps": 27.674390515553178, 00:22:21.480 "io_failed": 0, 00:22:21.480 "io_timeout": 0, 00:22:21.480 "avg_latency_us": 18026.40175485976, 00:22:21.480 "min_latency_us": 1020.2763636363636, 00:22:21.480 "max_latency_us": 3019898.88 00:22:21.480 } 00:22:21.480 ], 00:22:21.480 "core_count": 1 00:22:21.480 } 00:22:21.480 04:12:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82836 00:22:21.480 04:12:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:21.480 04:12:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:22:21.481 Running I/O for 10 seconds... 
00:22:22.414 04:12:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:22.414 6933.00 IOPS, 27.08 MiB/s [2024-12-09T04:12:04.364Z] [2024-12-09 04:12:04.338291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.414 [2024-12-09 04:12:04.338382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.414 [2024-12-09 04:12:04.338422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.414 [2024-12-09 04:12:04.338434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.414 [2024-12-09 04:12:04.338446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.414 [2024-12-09 04:12:04.338457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.414 [2024-12-09 04:12:04.338483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.414 [2024-12-09 04:12:04.338492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.414 [2024-12-09 04:12:04.338503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.414 [2024-12-09 04:12:04.338512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.414 [2024-12-09 04:12:04.338524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.414 [2024-12-09 04:12:04.338533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.414 [2024-12-09 04:12:04.338544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.414 [2024-12-09 04:12:04.338553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.414 [2024-12-09 04:12:04.338564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.414 [2024-12-09 04:12:04.338573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.414 [2024-12-09 04:12:04.338583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.414 [2024-12-09 04:12:04.338592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.414 [2024-12-09 04:12:04.338603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65248 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.414 [2024-12-09 04:12:04.338612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.414 [2024-12-09 04:12:04.338623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.414 [2024-12-09 04:12:04.338633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.414 [2024-12-09 04:12:04.338659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.414 [2024-12-09 04:12:04.338669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.414 [2024-12-09 04:12:04.338680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.414 [2024-12-09 04:12:04.338690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.338701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.338711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.338722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.338733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.338744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.338754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.338765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.338774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.338789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.338799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.338811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.338820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.338832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 
[2024-12-09 04:12:04.338841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.338852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.338862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.338873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.338883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.338894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.338903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.338914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.338924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.338935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.338944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.338955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.338963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.338975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.338984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.338995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 
04:12:04.339689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.415 [2024-12-09 04:12:04.339739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.415 [2024-12-09 04:12:04.339760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.415 [2024-12-09 04:12:04.339781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.415 [2024-12-09 04:12:04.339802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.415 [2024-12-09 04:12:04.339822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.415 [2024-12-09 04:12:04.339843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.415 [2024-12-09 04:12:04.339855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.339864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.339875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.339886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.339910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.339920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.339931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.339941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.339952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.339961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.339972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.339982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.339992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.416 [2024-12-09 04:12:04.340086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.416 [2024-12-09 04:12:04.340107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:31 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.416 [2024-12-09 04:12:04.340288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64872 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:22.416 [2024-12-09 04:12:04.340571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340784] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.340981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.340992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.341002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.341013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.341022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.341033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.341042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.341054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.416 [2024-12-09 04:12:04.341063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.416 [2024-12-09 04:12:04.341075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.417 [2024-12-09 04:12:04.341084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.417 [2024-12-09 04:12:04.341095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.417 [2024-12-09 04:12:04.341105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.417 [2024-12-09 04:12:04.341116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.417 [2024-12-09 04:12:04.341126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.417 [2024-12-09 04:12:04.341137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b9fd0 is same with the state(6) to be set 00:22:22.417 [2024-12-09 04:12:04.341149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.417 [2024-12-09 04:12:04.341156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.417 [2024-12-09 04:12:04.341173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65168 len:8 PRP1 0x0 PRP2 0x0 00:22:22.417 [2024-12-09 04:12:04.341191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.417 [2024-12-09 04:12:04.341500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:22.417 [2024-12-09 04:12:04.341588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75be50 (9): Bad file descriptor 00:22:22.417 [2024-12-09 04:12:04.341705] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.417 [2024-12-09 04:12:04.341728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75be50 with addr=10.0.0.3, 
port=4420 00:22:22.417 [2024-12-09 04:12:04.341739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75be50 is same with the state(6) to be set 00:22:22.417 [2024-12-09 04:12:04.341759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75be50 (9): Bad file descriptor 00:22:22.417 [2024-12-09 04:12:04.341777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:22.417 [2024-12-09 04:12:04.341786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:22.417 [2024-12-09 04:12:04.341798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:22.417 [2024-12-09 04:12:04.341835] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:22.417 [2024-12-09 04:12:04.341870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:22.417 04:12:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:23.606 4042.50 IOPS, 15.79 MiB/s [2024-12-09T04:12:05.556Z] [2024-12-09 04:12:05.342013] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-12-09 04:12:05.342273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75be50 with addr=10.0.0.3, port=4420 00:22:23.606 [2024-12-09 04:12:05.342463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75be50 is same with the state(6) to be set 00:22:23.606 [2024-12-09 04:12:05.342498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75be50 (9): Bad file descriptor 00:22:23.606 [2024-12-09 04:12:05.342537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:23.606 [2024-12-09 04:12:05.342552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:23.606 [2024-12-09 04:12:05.342565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:23.606 [2024-12-09 04:12:05.342578] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:22:23.606 [2024-12-09 04:12:05.342591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:24.549 2695.00 IOPS, 10.53 MiB/s [2024-12-09T04:12:06.499Z] [2024-12-09 04:12:06.342766] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.549 [2024-12-09 04:12:06.342993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75be50 with addr=10.0.0.3, port=4420 00:22:24.549 [2024-12-09 04:12:06.343153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75be50 is same with the state(6) to be set 00:22:24.549 [2024-12-09 04:12:06.343421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75be50 (9): Bad file descriptor 00:22:24.549 [2024-12-09 04:12:06.343467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:24.549 [2024-12-09 04:12:06.343482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:24.549 [2024-12-09 04:12:06.343494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:24.549 [2024-12-09 04:12:06.343515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:24.549 [2024-12-09 04:12:06.343529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:25.481 2021.25 IOPS, 7.90 MiB/s [2024-12-09T04:12:07.431Z] [2024-12-09 04:12:07.345997] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.481 [2024-12-09 04:12:07.346077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75be50 with addr=10.0.0.3, port=4420 00:22:25.481 [2024-12-09 04:12:07.346095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75be50 is same with the state(6) to be set 00:22:25.481 [2024-12-09 04:12:07.346407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75be50 (9): Bad file descriptor 00:22:25.481 [2024-12-09 04:12:07.346678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:25.481 [2024-12-09 04:12:07.346836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:25.481 [2024-12-09 04:12:07.346855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:25.481 [2024-12-09 04:12:07.346869] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:22:25.481 [2024-12-09 04:12:07.346883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:25.481 04:12:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:25.739 [2024-12-09 04:12:07.623760] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:25.739 04:12:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82836 00:22:26.563 1617.00 IOPS, 6.32 MiB/s [2024-12-09T04:12:08.513Z] [2024-12-09 04:12:08.373825] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:22:28.430 2762.00 IOPS, 10.79 MiB/s [2024-12-09T04:12:11.314Z] 3922.71 IOPS, 15.32 MiB/s [2024-12-09T04:12:12.248Z] 4784.38 IOPS, 18.69 MiB/s [2024-12-09T04:12:13.624Z] 5460.22 IOPS, 21.33 MiB/s [2024-12-09T04:12:13.624Z] 6016.30 IOPS, 23.50 MiB/s 00:22:31.674 Latency(us) 00:22:31.674 [2024-12-09T04:12:13.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.674 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:31.674 Verification LBA range: start 0x0 length 0x4000 00:22:31.674 NVMe0n1 : 10.01 6024.30 23.53 4048.57 0.00 12681.93 718.66 3019898.88 00:22:31.674 [2024-12-09T04:12:13.624Z] =================================================================================================================== 00:22:31.674 [2024-12-09T04:12:13.624Z] Total : 6024.30 23.53 4048.57 0.00 12681.93 0.00 3019898.88 00:22:31.674 { 00:22:31.674 "results": [ 00:22:31.674 { 00:22:31.674 "job": "NVMe0n1", 00:22:31.674 "core_mask": "0x4", 00:22:31.674 "workload": "verify", 00:22:31.674 "status": "finished", 00:22:31.674 "verify_range": { 00:22:31.674 "start": 0, 00:22:31.674 "length": 16384 00:22:31.674 }, 00:22:31.674 "queue_depth": 128, 00:22:31.674 "io_size": 4096, 00:22:31.674 "runtime": 10.00797, 00:22:31.674 "iops": 6024.298633988711, 00:22:31.674 "mibps": 23.532416539018403, 00:22:31.674 "io_failed": 40518, 00:22:31.674 "io_timeout": 0, 00:22:31.674 "avg_latency_us": 12681.929636946197, 00:22:31.674 "min_latency_us": 718.6618181818181, 00:22:31.674 "max_latency_us": 3019898.88 00:22:31.674 } 00:22:31.674 ], 00:22:31.674 "core_count": 1 00:22:31.674 } 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82714 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82714 ']' 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82714 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82714 00:22:31.674 killing process with pid 82714 00:22:31.674 Received shutdown signal, test time was about 10.000000 seconds 00:22:31.674 00:22:31.674 Latency(us) 00:22:31.674 [2024-12-09T04:12:13.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.674 [2024-12-09T04:12:13.624Z] =================================================================================================================== 00:22:31.674 [2024-12-09T04:12:13.624Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82714' 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82714 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82714 00:22:31.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82946 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82946 /var/tmp/bdevperf.sock 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82946 ']' 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.674 04:12:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:31.674 [2024-12-09 04:12:13.593135] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:22:31.674 [2024-12-09 04:12:13.594143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82946 ] 00:22:31.932 [2024-12-09 04:12:13.743239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.932 [2024-12-09 04:12:13.825621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.189 [2024-12-09 04:12:13.906460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:32.754 04:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.754 04:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:32.754 04:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82961 00:22:32.754 04:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82946 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:22:32.754 04:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:22:33.011 04:12:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:33.582 NVMe0n1 00:22:33.582 04:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=83008 00:22:33.582 04:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:22:33.582 04:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:33.582 Running I/O for 10 seconds... 
00:22:34.516 04:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:34.778 16764.00 IOPS, 65.48 MiB/s [2024-12-09T04:12:16.728Z] [2024-12-09 04:12:16.514180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 
04:12:16.514416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.778 [2024-12-09 04:12:16.514536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to 
be set 00:22:34.779 [2024-12-09 04:12:16.514588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514794] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514892] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.514996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 
00:22:34.779 [2024-12-09 04:12:16.515053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515095] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.779 [2024-12-09 04:12:16.515137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.780 [2024-12-09 04:12:16.515144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.780 [2024-12-09 04:12:16.515151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.780 [2024-12-09 04:12:16.515158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.780 [2024-12-09 04:12:16.515180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74e10 is same with the state(6) to be set 00:22:34.780 [2024-12-09 04:12:16.515264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515356] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:47616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:54016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71080 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.515979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.515988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.516001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.516010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.516036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.516045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.516056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:34.780 [2024-12-09 04:12:16.516074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.516085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:57704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.516095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.516106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.516115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.516126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.516135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.780 [2024-12-09 04:12:16.516146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.780 [2024-12-09 04:12:16.516155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:115112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:121976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 
04:12:16.516289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:115280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516505] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:56280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:43520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:56936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.781 [2024-12-09 04:12:16.516981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.781 [2024-12-09 04:12:16.516990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:34.782 [2024-12-09 04:12:16.517126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:28152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:68648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517345] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:57192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:26656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:36952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:56904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:34296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517761] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.782 [2024-12-09 04:12:16.517780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.782 [2024-12-09 04:12:16.517789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.783 [2024-12-09 04:12:16.517800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.783 [2024-12-09 04:12:16.517809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.783 [2024-12-09 04:12:16.517819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.783 [2024-12-09 04:12:16.517828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.783 [2024-12-09 04:12:16.517838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:123704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.783 [2024-12-09 04:12:16.517847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.783 [2024-12-09 04:12:16.517858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.783 [2024-12-09 04:12:16.517867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.783 [2024-12-09 04:12:16.517877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.783 [2024-12-09 04:12:16.517886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.783 [2024-12-09 04:12:16.517897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.783 [2024-12-09 04:12:16.517906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.783 [2024-12-09 04:12:16.517917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.783 [2024-12-09 04:12:16.517926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.783 [2024-12-09 04:12:16.517937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.783 [2024-12-09 04:12:16.517946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.783 [2024-12-09 04:12:16.517957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 
nsid:1 lba:39552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.783 [2024-12-09 04:12:16.517966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.783 [2024-12-09 04:12:16.517976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.783 [2024-12-09 04:12:16.517985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.783 [2024-12-09 04:12:16.517996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.783 [2024-12-09 04:12:16.518005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.783 [2024-12-09 04:12:16.518015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.783 [2024-12-09 04:12:16.518030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.783 [2024-12-09 04:12:16.518041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680920 is same with the state(6) to be set 00:22:34.783 [2024-12-09 04:12:16.518052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:34.783 [2024-12-09 04:12:16.518059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:34.783 [2024-12-09 04:12:16.518073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108664 len:8 PRP1 0x0 PRP2 0x0 00:22:34.783 [2024-12-09 04:12:16.518082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.783 [2024-12-09 04:12:16.518437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:34.783 [2024-12-09 04:12:16.518525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1613e50 (9): Bad file descriptor 00:22:34.783 [2024-12-09 04:12:16.518688] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.783 [2024-12-09 04:12:16.518709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1613e50 with addr=10.0.0.3, port=4420 00:22:34.783 [2024-12-09 04:12:16.518720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1613e50 is same with the state(6) to be set 00:22:34.783 [2024-12-09 04:12:16.518738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1613e50 (9): Bad file descriptor 00:22:34.783 [2024-12-09 04:12:16.518753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:34.783 [2024-12-09 04:12:16.518762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:34.783 [2024-12-09 04:12:16.518773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:22:34.783 [2024-12-09 04:12:16.518784] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:22:34.783 [2024-12-09 04:12:16.518794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:34.783 04:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 83008 00:22:36.712 9526.00 IOPS, 37.21 MiB/s [2024-12-09T04:12:18.662Z] 6350.67 IOPS, 24.81 MiB/s [2024-12-09T04:12:18.662Z] [2024-12-09 04:12:18.519015] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.712 [2024-12-09 04:12:18.519270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1613e50 with addr=10.0.0.3, port=4420 00:22:36.712 [2024-12-09 04:12:18.519305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1613e50 is same with the state(6) to be set 00:22:36.712 [2024-12-09 04:12:18.519353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1613e50 (9): Bad file descriptor 00:22:36.712 [2024-12-09 04:12:18.519378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:36.712 [2024-12-09 04:12:18.519389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:36.712 [2024-12-09 04:12:18.519401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:36.712 [2024-12-09 04:12:18.519413] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:22:36.712 [2024-12-09 04:12:18.519425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:38.583 4763.00 IOPS, 18.61 MiB/s [2024-12-09T04:12:20.533Z] 3810.40 IOPS, 14.88 MiB/s [2024-12-09T04:12:20.533Z] [2024-12-09 04:12:20.519576] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.583 [2024-12-09 04:12:20.519643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1613e50 with addr=10.0.0.3, port=4420 00:22:38.583 [2024-12-09 04:12:20.519658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1613e50 is same with the state(6) to be set 00:22:38.583 [2024-12-09 04:12:20.519680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1613e50 (9): Bad file descriptor 00:22:38.583 [2024-12-09 04:12:20.519699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:38.583 [2024-12-09 04:12:20.519708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:38.583 [2024-12-09 04:12:20.519726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:38.583 [2024-12-09 04:12:20.519736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:22:38.583 [2024-12-09 04:12:20.519747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:40.449 3175.33 IOPS, 12.40 MiB/s [2024-12-09T04:12:22.656Z] 2721.71 IOPS, 10.63 MiB/s [2024-12-09T04:12:22.656Z] [2024-12-09 04:12:22.519820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:40.706 [2024-12-09 04:12:22.519901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:40.706 [2024-12-09 04:12:22.519913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:40.706 [2024-12-09 04:12:22.519922] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:22:40.706 [2024-12-09 04:12:22.519933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:22:41.638 2381.50 IOPS, 9.30 MiB/s 00:22:41.638 Latency(us) 00:22:41.638 [2024-12-09T04:12:23.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.638 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:22:41.638 NVMe0n1 : 8.15 2337.28 9.13 15.70 0.00 54315.64 6970.65 7015926.69 00:22:41.638 [2024-12-09T04:12:23.588Z] =================================================================================================================== 00:22:41.638 [2024-12-09T04:12:23.588Z] Total : 2337.28 9.13 15.70 0.00 54315.64 6970.65 7015926.69 00:22:41.638 { 00:22:41.638 "results": [ 00:22:41.638 { 00:22:41.638 "job": "NVMe0n1", 00:22:41.638 "core_mask": "0x4", 00:22:41.638 "workload": "randread", 00:22:41.638 "status": "finished", 00:22:41.638 "queue_depth": 128, 00:22:41.638 "io_size": 4096, 00:22:41.638 "runtime": 8.151341, 00:22:41.638 "iops": 2337.2841352116175, 00:22:41.638 "mibps": 9.13001615317038, 00:22:41.638 "io_failed": 128, 00:22:41.638 "io_timeout": 0, 00:22:41.638 "avg_latency_us": 54315.64303725472, 00:22:41.638 "min_latency_us": 6970.647272727273, 00:22:41.638 "max_latency_us": 7015926.69090909 00:22:41.638 } 00:22:41.638 ], 00:22:41.638 "core_count": 1 00:22:41.638 } 00:22:41.638 04:12:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:41.638 Attaching 5 probes... 
00:22:41.638 1413.054185: reset bdev controller NVMe0 00:22:41.638 1413.205770: reconnect bdev controller NVMe0 00:22:41.638 3413.489378: reconnect delay bdev controller NVMe0 00:22:41.638 3413.528805: reconnect bdev controller NVMe0 00:22:41.638 5414.082595: reconnect delay bdev controller NVMe0 00:22:41.638 5414.118897: reconnect bdev controller NVMe0 00:22:41.638 7414.403440: reconnect delay bdev controller NVMe0 00:22:41.638 7414.441346: reconnect bdev controller NVMe0 00:22:41.638 04:12:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:22:41.638 04:12:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:22:41.638 04:12:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82961 00:22:41.638 04:12:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:41.638 04:12:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82946 00:22:41.638 04:12:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82946 ']' 00:22:41.638 04:12:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82946 00:22:41.638 04:12:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:41.638 04:12:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.638 04:12:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82946 00:22:41.896 killing process with pid 82946 00:22:41.896 Received shutdown signal, test time was about 8.226861 seconds 00:22:41.896 00:22:41.896 Latency(us) 00:22:41.896 [2024-12-09T04:12:23.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.896 [2024-12-09T04:12:23.846Z] =================================================================================================================== 00:22:41.896 [2024-12-09T04:12:23.846Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:41.896 04:12:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:41.896 04:12:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:41.896 04:12:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82946' 00:22:41.896 04:12:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82946 00:22:41.896 04:12:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82946 00:22:41.896 04:12:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:42.153 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:22:42.154 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:22:42.154 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:42.154 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:22:42.412 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:42.412 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:22:42.412 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:42.412 04:12:24 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:42.412 rmmod nvme_tcp 00:22:42.412 rmmod nvme_fabrics 00:22:42.412 rmmod nvme_keyring 00:22:42.412 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:42.412 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:22:42.412 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:22:42.412 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82527 ']' 00:22:42.412 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82527 00:22:42.412 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82527 ']' 00:22:42.412 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82527 00:22:42.412 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:42.412 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.412 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82527 00:22:42.412 killing process with pid 82527 00:22:42.412 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:42.412 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:42.412 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82527' 00:22:42.412 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82527 00:22:42.412 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82527 00:22:42.670 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:42.670 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:42.670 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:42.670 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:22:42.670 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:22:42.670 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:42.670 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:22:42.670 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:42.670 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:42.670 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:42.670 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:42.670 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:42.670 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:42.670 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:42.670 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:42.670 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:42.670 04:12:24 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:42.670 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:42.928 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:42.928 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:42.928 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:42.928 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:42.928 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:42.928 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.928 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.928 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.928 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:22:42.929 ************************************ 00:22:42.929 END TEST nvmf_timeout 00:22:42.929 ************************************ 00:22:42.929 00:22:42.929 real 0m46.608s 00:22:42.929 user 2m15.902s 00:22:42.929 sys 0m5.756s 00:22:42.929 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:42.929 04:12:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:42.929 04:12:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:22:42.929 04:12:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:22:42.929 ************************************ 00:22:42.929 END TEST nvmf_host 00:22:42.929 ************************************ 00:22:42.929 00:22:42.929 real 5m9.505s 00:22:42.929 user 13m27.305s 00:22:42.929 sys 1m13.084s 00:22:42.929 04:12:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:42.929 04:12:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.929 04:12:24 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:22:42.929 04:12:24 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:22:42.929 00:22:42.929 real 13m5.574s 00:22:42.929 user 31m30.262s 00:22:42.929 sys 3m19.707s 00:22:42.929 04:12:24 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:42.929 04:12:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:42.929 ************************************ 00:22:42.929 END TEST nvmf_tcp 00:22:42.929 ************************************ 00:22:43.187 04:12:24 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:22:43.187 04:12:24 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:43.187 04:12:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:43.187 04:12:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:43.187 04:12:24 -- common/autotest_common.sh@10 -- # set +x 00:22:43.187 ************************************ 00:22:43.187 START TEST nvmf_dif 00:22:43.187 ************************************ 00:22:43.187 04:12:24 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:43.187 * Looking for test storage... 
00:22:43.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:43.187 04:12:24 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:43.187 04:12:24 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:22:43.187 04:12:24 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:43.187 04:12:25 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.187 04:12:25 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:22:43.187 04:12:25 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.187 04:12:25 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:43.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.187 --rc genhtml_branch_coverage=1 00:22:43.187 --rc genhtml_function_coverage=1 00:22:43.187 --rc genhtml_legend=1 00:22:43.187 --rc geninfo_all_blocks=1 00:22:43.187 --rc geninfo_unexecuted_blocks=1 00:22:43.187 00:22:43.187 ' 00:22:43.187 04:12:25 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:43.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.187 --rc genhtml_branch_coverage=1 00:22:43.187 --rc genhtml_function_coverage=1 00:22:43.187 --rc genhtml_legend=1 00:22:43.187 --rc geninfo_all_blocks=1 00:22:43.187 --rc geninfo_unexecuted_blocks=1 00:22:43.187 00:22:43.187 ' 00:22:43.187 04:12:25 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:22:43.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.187 --rc genhtml_branch_coverage=1 00:22:43.187 --rc genhtml_function_coverage=1 00:22:43.187 --rc genhtml_legend=1 00:22:43.188 --rc geninfo_all_blocks=1 00:22:43.188 --rc geninfo_unexecuted_blocks=1 00:22:43.188 00:22:43.188 ' 00:22:43.188 04:12:25 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:43.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.188 --rc genhtml_branch_coverage=1 00:22:43.188 --rc genhtml_function_coverage=1 00:22:43.188 --rc genhtml_legend=1 00:22:43.188 --rc geninfo_all_blocks=1 00:22:43.188 --rc geninfo_unexecuted_blocks=1 00:22:43.188 00:22:43.188 ' 00:22:43.188 04:12:25 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:43.188 04:12:25 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.188 04:12:25 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.188 04:12:25 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.188 04:12:25 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.188 04:12:25 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.188 04:12:25 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.188 04:12:25 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.188 04:12:25 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:22:43.188 04:12:25 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:43.188 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:43.188 04:12:25 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:22:43.188 04:12:25 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:43.188 04:12:25 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:43.188 04:12:25 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:22:43.188 04:12:25 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.188 04:12:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:43.188 04:12:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:43.188 04:12:25 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:43.188 Cannot find device "nvmf_init_br" 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@162 -- # true 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:43.188 Cannot find device "nvmf_init_br2" 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@163 -- # true 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:43.188 Cannot find device "nvmf_tgt_br" 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@164 -- # true 00:22:43.188 04:12:25 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:43.446 Cannot find device "nvmf_tgt_br2" 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@165 -- # true 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:43.446 Cannot find device "nvmf_init_br" 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@166 -- # true 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:43.446 Cannot find device "nvmf_init_br2" 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@167 -- # true 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:43.446 Cannot find device "nvmf_tgt_br" 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@168 -- # true 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:43.446 Cannot find device "nvmf_tgt_br2" 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@169 -- # true 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:43.446 Cannot find device "nvmf_br" 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@170 -- # true 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:22:43.446 Cannot find device "nvmf_init_if" 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@171 -- # true 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:43.446 Cannot find device "nvmf_init_if2" 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@172 -- # true 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:43.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@173 -- # true 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:43.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@174 -- # true 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:43.446 04:12:25 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:43.704 04:12:25 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:43.704 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:43.704 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:22:43.704 00:22:43.704 --- 10.0.0.3 ping statistics --- 00:22:43.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.704 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:43.704 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:43.704 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:22:43.704 00:22:43.704 --- 10.0.0.4 ping statistics --- 00:22:43.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.704 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:43.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:43.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:22:43.704 00:22:43.704 --- 10.0.0.1 ping statistics --- 00:22:43.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.704 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:43.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:43.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:22:43.704 00:22:43.704 --- 10.0.0.2 ping statistics --- 00:22:43.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.704 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:22:43.704 04:12:25 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:43.962 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:43.962 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:43.962 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:43.962 04:12:25 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.962 04:12:25 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:43.962 04:12:25 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:43.962 04:12:25 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.962 04:12:25 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:43.962 04:12:25 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:43.962 04:12:25 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:43.962 04:12:25 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:22:43.962 04:12:25 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:43.962 04:12:25 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:43.962 04:12:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:43.962 04:12:25 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83495 00:22:43.962 04:12:25 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:43.962 04:12:25 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83495 00:22:43.962 04:12:25 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 83495 ']' 00:22:43.962 04:12:25 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.962 04:12:25 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.962 04:12:25 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.962 04:12:25 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.962 04:12:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:44.220 [2024-12-09 04:12:25.967828] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:22:44.220 [2024-12-09 04:12:25.967932] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.220 [2024-12-09 04:12:26.122030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.482 [2024-12-09 04:12:26.182779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:44.482 [2024-12-09 04:12:26.182854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.482 [2024-12-09 04:12:26.182869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.482 [2024-12-09 04:12:26.182881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.482 [2024-12-09 04:12:26.182891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.482 [2024-12-09 04:12:26.183383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.482 [2024-12-09 04:12:26.259964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:44.482 04:12:26 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.482 04:12:26 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:22:44.482 04:12:26 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:44.482 04:12:26 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:44.482 04:12:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:44.482 04:12:26 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.482 04:12:26 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:22:44.482 04:12:26 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:22:44.482 04:12:26 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.482 04:12:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:44.482 [2024-12-09 04:12:26.392026] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.482 04:12:26 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.482 04:12:26 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:22:44.482 04:12:26 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:44.482 04:12:26 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:44.482 04:12:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:44.482 ************************************ 00:22:44.482 START TEST fio_dif_1_default 00:22:44.482 ************************************ 00:22:44.482 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:22:44.482 04:12:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:22:44.482 04:12:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:22:44.482 04:12:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:22:44.482 04:12:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:22:44.482 04:12:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:22:44.482 04:12:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:44.482 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.482 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:44.482 bdev_null0 00:22:44.482 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.482 04:12:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:44.482 
04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.482 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:44.482 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.482 04:12:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:44.482 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.482 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:44.739 [2024-12-09 04:12:26.436212] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.739 { 00:22:44.739 "params": { 00:22:44.739 "name": "Nvme$subsystem", 00:22:44.739 "trtype": "$TEST_TRANSPORT", 00:22:44.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.739 "adrfam": "ipv4", 00:22:44.739 "trsvcid": "$NVMF_PORT", 00:22:44.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.739 "hdgst": ${hdgst:-false}, 00:22:44.739 "ddgst": ${ddgst:-false} 00:22:44.739 }, 00:22:44.739 "method": "bdev_nvme_attach_controller" 00:22:44.739 } 00:22:44.739 EOF 00:22:44.739 )") 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:44.739 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:44.740 "params": { 00:22:44.740 "name": "Nvme0", 00:22:44.740 "trtype": "tcp", 00:22:44.740 "traddr": "10.0.0.3", 00:22:44.740 "adrfam": "ipv4", 00:22:44.740 "trsvcid": "4420", 00:22:44.740 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:44.740 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:44.740 "hdgst": false, 00:22:44.740 "ddgst": false 00:22:44.740 }, 00:22:44.740 "method": "bdev_nvme_attach_controller" 00:22:44.740 }' 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:44.740 04:12:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:44.740 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:44.740 fio-3.35 00:22:44.740 Starting 1 thread 00:22:56.973 00:22:56.973 filename0: (groupid=0, jobs=1): err= 0: pid=83554: Mon Dec 9 04:12:37 2024 00:22:56.973 read: IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(396MiB/10001msec) 00:22:56.973 slat (usec): min=5, max=138, avg= 7.60, stdev= 3.15 00:22:56.973 clat (usec): min=315, max=2759, avg=372.75, stdev=39.91 00:22:56.973 lat (usec): min=321, max=2768, avg=380.36, stdev=40.63 00:22:56.973 clat percentiles (usec): 00:22:56.973 | 1.00th=[ 318], 5.00th=[ 326], 
10.00th=[ 334], 20.00th=[ 343], 00:22:56.973 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 375], 00:22:56.973 | 70.00th=[ 388], 80.00th=[ 396], 90.00th=[ 416], 95.00th=[ 433], 00:22:56.973 | 99.00th=[ 482], 99.50th=[ 506], 99.90th=[ 562], 99.95th=[ 619], 00:22:56.973 | 99.99th=[ 1004] 00:22:56.973 bw ( KiB/s): min=39040, max=41504, per=100.00%, avg=40529.37, stdev=717.51, samples=19 00:22:56.973 iops : min= 9760, max=10376, avg=10132.32, stdev=179.36, samples=19 00:22:56.973 lat (usec) : 500=99.42%, 750=0.54%, 1000=0.02% 00:22:56.973 lat (msec) : 2=0.01%, 4=0.01% 00:22:56.973 cpu : usr=85.80%, sys=12.45%, ctx=25, majf=0, minf=9 00:22:56.973 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:56.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.973 issued rwts: total=101264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:56.973 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:56.973 00:22:56.973 Run status group 0 (all jobs): 00:22:56.973 READ: bw=39.6MiB/s (41.5MB/s), 39.6MiB/s-39.6MiB/s (41.5MB/s-41.5MB/s), io=396MiB (415MB), run=10001-10001msec 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.973 00:22:56.973 real 0m11.166s 00:22:56.973 user 0m9.341s 00:22:56.973 sys 0m1.557s 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:56.973 ************************************ 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:56.973 END TEST fio_dif_1_default 00:22:56.973 ************************************ 00:22:56.973 04:12:37 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:22:56.973 04:12:37 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:56.973 04:12:37 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:56.973 04:12:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:56.973 ************************************ 00:22:56.973 START TEST fio_dif_1_multi_subsystems 00:22:56.973 ************************************ 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:56.973 bdev_null0 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:56.973 [2024-12-09 04:12:37.664419] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:56.973 bdev_null1 00:22:56.973 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:56.974 { 00:22:56.974 "params": { 00:22:56.974 "name": "Nvme$subsystem", 00:22:56.974 "trtype": "$TEST_TRANSPORT", 00:22:56.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.974 "adrfam": "ipv4", 00:22:56.974 "trsvcid": "$NVMF_PORT", 00:22:56.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.974 "hdgst": ${hdgst:-false}, 00:22:56.974 "ddgst": ${ddgst:-false} 00:22:56.974 }, 00:22:56.974 "method": "bdev_nvme_attach_controller" 00:22:56.974 } 00:22:56.974 EOF 00:22:56.974 )") 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 
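For reference, the create_transport and create_subsystems steps traced above reduce to a handful of RPC calls against the running nvmf_tgt. A minimal standalone sketch, assuming rpc_cmd in this trace wraps SPDK's scripts/rpc.py with its default socket (the bdev geometry, DIF type, NQNs, serial numbers and listener address are copied verbatim from the trace):

scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# the multi-subsystem test repeats the last four calls with bdev_null1 / cnode1 / host1, as traced above
# scripts/rpc.py bdev_get_bdevs -b bdev_null0 can confirm the 16-byte metadata and DIF type on the null bdev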
00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:56.974 { 00:22:56.974 "params": { 00:22:56.974 "name": "Nvme$subsystem", 00:22:56.974 "trtype": "$TEST_TRANSPORT", 00:22:56.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.974 "adrfam": "ipv4", 00:22:56.974 "trsvcid": "$NVMF_PORT", 00:22:56.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.974 "hdgst": ${hdgst:-false}, 00:22:56.974 "ddgst": ${ddgst:-false} 00:22:56.974 }, 00:22:56.974 "method": "bdev_nvme_attach_controller" 00:22:56.974 } 00:22:56.974 EOF 00:22:56.974 )") 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
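Everything the fio job needs in order to reach those subsystems is carried in the JSON printed in the next records: one bdev_nvme_attach_controller entry per target, loaded by the spdk_bdev fio plugin through --spdk_json_conf. Stripped of the /dev/fd plumbing used by the test, the launch is essentially the following; the binary paths are copied from this trace, while writing the config and job file to regular files is an assumption for readability, and the printed attach entries sit inside SPDK's usual "subsystems"/"bdev"/"config" JSON envelope:

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme_attach.json /tmp/dif.job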
00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:56.974 "params": { 00:22:56.974 "name": "Nvme0", 00:22:56.974 "trtype": "tcp", 00:22:56.974 "traddr": "10.0.0.3", 00:22:56.974 "adrfam": "ipv4", 00:22:56.974 "trsvcid": "4420", 00:22:56.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:56.974 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:56.974 "hdgst": false, 00:22:56.974 "ddgst": false 00:22:56.974 }, 00:22:56.974 "method": "bdev_nvme_attach_controller" 00:22:56.974 },{ 00:22:56.974 "params": { 00:22:56.974 "name": "Nvme1", 00:22:56.974 "trtype": "tcp", 00:22:56.974 "traddr": "10.0.0.3", 00:22:56.974 "adrfam": "ipv4", 00:22:56.974 "trsvcid": "4420", 00:22:56.974 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:56.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:56.974 "hdgst": false, 00:22:56.974 "ddgst": false 00:22:56.974 }, 00:22:56.974 "method": "bdev_nvme_attach_controller" 00:22:56.974 }' 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:56.974 04:12:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:56.974 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:56.974 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:56.974 fio-3.35 00:22:56.974 Starting 2 threads 00:23:06.998 00:23:06.998 filename0: (groupid=0, jobs=1): err= 0: pid=83714: Mon Dec 9 04:12:48 2024 00:23:06.998 read: IOPS=5711, BW=22.3MiB/s (23.4MB/s)(223MiB/10001msec) 00:23:06.998 slat (nsec): min=6017, max=67264, avg=11709.27, stdev=3993.63 00:23:06.998 clat (usec): min=523, max=4103, avg=668.82, stdev=73.34 00:23:06.998 lat (usec): min=530, max=4137, avg=680.53, stdev=73.97 00:23:06.998 clat percentiles (usec): 00:23:06.998 | 1.00th=[ 578], 5.00th=[ 603], 10.00th=[ 619], 20.00th=[ 635], 00:23:06.998 | 30.00th=[ 644], 40.00th=[ 652], 50.00th=[ 660], 60.00th=[ 668], 00:23:06.998 | 70.00th=[ 676], 80.00th=[ 693], 90.00th=[ 717], 95.00th=[ 750], 00:23:06.998 | 99.00th=[ 988], 99.50th=[ 1020], 99.90th=[ 1090], 99.95th=[ 1418], 00:23:06.998 | 99.99th=[ 2933] 00:23:06.998 bw ( KiB/s): min=18048, max=23360, per=49.97%, avg=22834.53, stdev=1227.05, samples=19 00:23:06.998 iops : min= 4512, max= 
5840, avg=5708.63, stdev=306.76, samples=19 00:23:06.998 lat (usec) : 750=95.01%, 1000=4.20% 00:23:06.998 lat (msec) : 2=0.75%, 4=0.04%, 10=0.01% 00:23:06.998 cpu : usr=88.75%, sys=9.78%, ctx=12, majf=0, minf=0 00:23:06.998 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:06.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:06.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:06.998 issued rwts: total=57124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:06.998 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:06.998 filename1: (groupid=0, jobs=1): err= 0: pid=83715: Mon Dec 9 04:12:48 2024 00:23:06.998 read: IOPS=5712, BW=22.3MiB/s (23.4MB/s)(223MiB/10001msec) 00:23:06.998 slat (nsec): min=5991, max=67193, avg=11950.30, stdev=4090.45 00:23:06.998 clat (usec): min=341, max=3588, avg=667.34, stdev=67.86 00:23:06.998 lat (usec): min=347, max=3616, avg=679.29, stdev=68.22 00:23:06.998 clat percentiles (usec): 00:23:06.998 | 1.00th=[ 603], 5.00th=[ 619], 10.00th=[ 627], 20.00th=[ 635], 00:23:06.998 | 30.00th=[ 644], 40.00th=[ 652], 50.00th=[ 660], 60.00th=[ 660], 00:23:06.998 | 70.00th=[ 668], 80.00th=[ 685], 90.00th=[ 709], 95.00th=[ 750], 00:23:06.998 | 99.00th=[ 988], 99.50th=[ 1012], 99.90th=[ 1090], 99.95th=[ 1467], 00:23:06.998 | 99.99th=[ 2900] 00:23:06.998 bw ( KiB/s): min=18116, max=23360, per=49.98%, avg=22838.11, stdev=1212.33, samples=19 00:23:06.998 iops : min= 4529, max= 5840, avg=5709.53, stdev=303.08, samples=19 00:23:06.998 lat (usec) : 500=0.01%, 750=95.34%, 1000=3.86% 00:23:06.998 lat (msec) : 2=0.77%, 4=0.02% 00:23:06.998 cpu : usr=89.12%, sys=9.48%, ctx=11, majf=0, minf=0 00:23:06.998 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:06.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:06.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:06.998 issued rwts: total=57132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:06.998 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:06.998 00:23:06.998 Run status group 0 (all jobs): 00:23:06.998 READ: bw=44.6MiB/s (46.8MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=446MiB (468MB), run=10001-10001msec 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.998 04:12:48 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.998 00:23:06.998 real 0m11.237s 00:23:06.998 user 0m18.596s 00:23:06.998 sys 0m2.276s 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.998 04:12:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:06.998 ************************************ 00:23:06.998 END TEST fio_dif_1_multi_subsystems 00:23:06.998 ************************************ 00:23:06.998 04:12:48 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:06.998 04:12:48 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:06.998 04:12:48 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.998 04:12:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:06.998 ************************************ 00:23:06.998 START TEST fio_dif_rand_params 00:23:06.998 ************************************ 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:06.998 bdev_null0 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.998 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.257 [2024-12-09 04:12:48.952936] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:07.257 { 00:23:07.257 "params": { 00:23:07.257 "name": "Nvme$subsystem", 00:23:07.257 "trtype": "$TEST_TRANSPORT", 00:23:07.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.257 "adrfam": "ipv4", 00:23:07.257 "trsvcid": "$NVMF_PORT", 00:23:07.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.257 "hdgst": ${hdgst:-false}, 00:23:07.257 "ddgst": ${ddgst:-false} 00:23:07.257 }, 00:23:07.257 "method": "bdev_nvme_attach_controller" 00:23:07.257 } 00:23:07.257 EOF 00:23:07.257 )") 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 
-- # local file 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
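The job file handed to fio on /dev/fd/61 is generated by gen_fio_conf and is not echoed in the trace, but its key parameters are visible in the variables set above (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5) and in the fio banner below (randread, spdk_bdev engine, 3 threads). A sketch of an equivalent job file; the filename and the time_based flag are assumptions, taking Nvme0n1 as the bdev name that bdev_nvme would give namespace 1 of the attached controller "Nvme0":

[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
; bdev name assumed from the attach-controller entry printed below
filename=Nvme0n1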
00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:07.257 "params": { 00:23:07.257 "name": "Nvme0", 00:23:07.257 "trtype": "tcp", 00:23:07.257 "traddr": "10.0.0.3", 00:23:07.257 "adrfam": "ipv4", 00:23:07.257 "trsvcid": "4420", 00:23:07.257 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:07.257 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:07.257 "hdgst": false, 00:23:07.257 "ddgst": false 00:23:07.257 }, 00:23:07.257 "method": "bdev_nvme_attach_controller" 00:23:07.257 }' 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:07.257 04:12:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:07.257 04:12:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:07.257 04:12:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:07.257 04:12:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:07.257 04:12:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:07.257 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:07.257 ... 
00:23:07.257 fio-3.35 00:23:07.257 Starting 3 threads 00:23:13.815 00:23:13.815 filename0: (groupid=0, jobs=1): err= 0: pid=83872: Mon Dec 9 04:12:54 2024 00:23:13.815 read: IOPS=305, BW=38.2MiB/s (40.0MB/s)(191MiB/5008msec) 00:23:13.815 slat (nsec): min=6061, max=97200, avg=21667.05, stdev=12577.25 00:23:13.815 clat (usec): min=3585, max=11333, avg=9768.35, stdev=357.49 00:23:13.815 lat (usec): min=3595, max=11345, avg=9790.02, stdev=358.40 00:23:13.815 clat percentiles (usec): 00:23:13.815 | 1.00th=[ 9372], 5.00th=[ 9634], 10.00th=[ 9634], 20.00th=[ 9634], 00:23:13.815 | 30.00th=[ 9765], 40.00th=[ 9765], 50.00th=[ 9765], 60.00th=[ 9765], 00:23:13.815 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[ 9896], 95.00th=[10028], 00:23:13.815 | 99.00th=[10683], 99.50th=[10814], 99.90th=[11338], 99.95th=[11338], 00:23:13.815 | 99.99th=[11338] 00:23:13.815 bw ( KiB/s): min=38400, max=39936, per=33.40%, avg=39083.30, stdev=435.12, samples=10 00:23:13.815 iops : min= 300, max= 312, avg=305.30, stdev= 3.40, samples=10 00:23:13.815 lat (msec) : 4=0.20%, 10=93.27%, 20=6.54% 00:23:13.815 cpu : usr=94.01%, sys=5.43%, ctx=8, majf=0, minf=0 00:23:13.815 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:13.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.815 issued rwts: total=1530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.815 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:13.815 filename0: (groupid=0, jobs=1): err= 0: pid=83873: Mon Dec 9 04:12:54 2024 00:23:13.815 read: IOPS=304, BW=38.1MiB/s (39.9MB/s)(191MiB/5001msec) 00:23:13.815 slat (nsec): min=6128, max=96760, avg=22117.19, stdev=12434.27 00:23:13.815 clat (usec): min=9527, max=11674, avg=9796.09, stdev=201.04 00:23:13.815 lat (usec): min=9538, max=11701, avg=9818.21, stdev=201.50 00:23:13.815 clat percentiles (usec): 00:23:13.815 | 1.00th=[ 9634], 5.00th=[ 9634], 10.00th=[ 9634], 20.00th=[ 9634], 00:23:13.815 | 30.00th=[ 9765], 40.00th=[ 9765], 50.00th=[ 9765], 60.00th=[ 9765], 00:23:13.815 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[ 9896], 95.00th=[10028], 00:23:13.815 | 99.00th=[10814], 99.50th=[10814], 99.90th=[11600], 99.95th=[11731], 00:23:13.815 | 99.99th=[11731] 00:23:13.815 bw ( KiB/s): min=38400, max=39168, per=33.33%, avg=38997.33, stdev=338.66, samples=9 00:23:13.815 iops : min= 300, max= 306, avg=304.67, stdev= 2.65, samples=9 00:23:13.815 lat (msec) : 10=92.91%, 20=7.09% 00:23:13.815 cpu : usr=94.28%, sys=5.22%, ctx=4, majf=0, minf=0 00:23:13.815 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:13.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.815 issued rwts: total=1524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.815 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:13.815 filename0: (groupid=0, jobs=1): err= 0: pid=83874: Mon Dec 9 04:12:54 2024 00:23:13.815 read: IOPS=304, BW=38.1MiB/s (39.9MB/s)(191MiB/5001msec) 00:23:13.815 slat (nsec): min=6014, max=83811, avg=20451.38, stdev=11338.25 00:23:13.815 clat (usec): min=9529, max=11514, avg=9797.93, stdev=197.69 00:23:13.815 lat (usec): min=9536, max=11544, avg=9818.38, stdev=198.28 00:23:13.815 clat percentiles (usec): 00:23:13.815 | 1.00th=[ 9634], 5.00th=[ 9634], 10.00th=[ 9634], 20.00th=[ 9634], 00:23:13.815 | 30.00th=[ 9765], 40.00th=[ 9765], 
50.00th=[ 9765], 60.00th=[ 9765], 00:23:13.815 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[ 9896], 95.00th=[10028], 00:23:13.815 | 99.00th=[10814], 99.50th=[10814], 99.90th=[11469], 99.95th=[11469], 00:23:13.815 | 99.99th=[11469] 00:23:13.815 bw ( KiB/s): min=38400, max=39168, per=33.33%, avg=38997.33, stdev=338.66, samples=9 00:23:13.815 iops : min= 300, max= 306, avg=304.67, stdev= 2.65, samples=9 00:23:13.815 lat (msec) : 10=93.31%, 20=6.69% 00:23:13.815 cpu : usr=95.44%, sys=4.00%, ctx=81, majf=0, minf=0 00:23:13.815 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:13.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.815 issued rwts: total=1524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.815 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:13.815 00:23:13.816 Run status group 0 (all jobs): 00:23:13.816 READ: bw=114MiB/s (120MB/s), 38.1MiB/s-38.2MiB/s (39.9MB/s-40.0MB/s), io=572MiB (600MB), run=5001-5008msec 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:13.816 04:12:55 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.816 bdev_null0 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.816 [2024-12-09 04:12:55.102559] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.816 bdev_null1 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.816 bdev_null2 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:13.816 { 00:23:13.816 "params": { 00:23:13.816 "name": "Nvme$subsystem", 00:23:13.816 "trtype": "$TEST_TRANSPORT", 00:23:13.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.816 "adrfam": "ipv4", 00:23:13.816 "trsvcid": "$NVMF_PORT", 00:23:13.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.816 "hdgst": ${hdgst:-false}, 00:23:13.816 "ddgst": ${ddgst:-false} 00:23:13.816 }, 00:23:13.816 "method": "bdev_nvme_attach_controller" 00:23:13.816 } 00:23:13.816 EOF 00:23:13.816 )") 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:13.816 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:13.817 { 00:23:13.817 "params": { 00:23:13.817 "name": "Nvme$subsystem", 00:23:13.817 "trtype": "$TEST_TRANSPORT", 00:23:13.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.817 "adrfam": "ipv4", 00:23:13.817 "trsvcid": "$NVMF_PORT", 00:23:13.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.817 "hdgst": ${hdgst:-false}, 00:23:13.817 "ddgst": ${ddgst:-false} 00:23:13.817 }, 00:23:13.817 "method": "bdev_nvme_attach_controller" 00:23:13.817 } 00:23:13.817 EOF 00:23:13.817 )") 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:13.817 04:12:55 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:13.817 { 00:23:13.817 "params": { 00:23:13.817 "name": "Nvme$subsystem", 00:23:13.817 "trtype": "$TEST_TRANSPORT", 00:23:13.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.817 "adrfam": "ipv4", 00:23:13.817 "trsvcid": "$NVMF_PORT", 00:23:13.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.817 "hdgst": ${hdgst:-false}, 00:23:13.817 "ddgst": ${ddgst:-false} 00:23:13.817 }, 00:23:13.817 "method": "bdev_nvme_attach_controller" 00:23:13.817 } 00:23:13.817 EOF 00:23:13.817 )") 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:13.817 "params": { 00:23:13.817 "name": "Nvme0", 00:23:13.817 "trtype": "tcp", 00:23:13.817 "traddr": "10.0.0.3", 00:23:13.817 "adrfam": "ipv4", 00:23:13.817 "trsvcid": "4420", 00:23:13.817 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.817 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:13.817 "hdgst": false, 00:23:13.817 "ddgst": false 00:23:13.817 }, 00:23:13.817 "method": "bdev_nvme_attach_controller" 00:23:13.817 },{ 00:23:13.817 "params": { 00:23:13.817 "name": "Nvme1", 00:23:13.817 "trtype": "tcp", 00:23:13.817 "traddr": "10.0.0.3", 00:23:13.817 "adrfam": "ipv4", 00:23:13.817 "trsvcid": "4420", 00:23:13.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:13.817 "hdgst": false, 00:23:13.817 "ddgst": false 00:23:13.817 }, 00:23:13.817 "method": "bdev_nvme_attach_controller" 00:23:13.817 },{ 00:23:13.817 "params": { 00:23:13.817 "name": "Nvme2", 00:23:13.817 "trtype": "tcp", 00:23:13.817 "traddr": "10.0.0.3", 00:23:13.817 "adrfam": "ipv4", 00:23:13.817 "trsvcid": "4420", 00:23:13.817 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:13.817 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:13.817 "hdgst": false, 00:23:13.817 "ddgst": false 00:23:13.817 }, 00:23:13.817 "method": "bdev_nvme_attach_controller" 00:23:13.817 }' 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:13.817 04:12:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:13.817 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:13.817 ... 00:23:13.817 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:13.817 ... 00:23:13.817 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:13.817 ... 00:23:13.817 fio-3.35 00:23:13.817 Starting 24 threads 00:23:26.017 00:23:26.017 filename0: (groupid=0, jobs=1): err= 0: pid=83975: Mon Dec 9 04:13:06 2024 00:23:26.017 read: IOPS=259, BW=1036KiB/s (1061kB/s)(10.2MiB/10051msec) 00:23:26.017 slat (usec): min=5, max=7053, avg=26.17, stdev=179.37 00:23:26.017 clat (msec): min=10, max=127, avg=61.56, stdev=18.14 00:23:26.017 lat (msec): min=10, max=127, avg=61.59, stdev=18.13 00:23:26.017 clat percentiles (msec): 00:23:26.017 | 1.00th=[ 21], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 45], 00:23:26.017 | 30.00th=[ 48], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 66], 00:23:26.017 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 85], 95.00th=[ 96], 00:23:26.017 | 99.00th=[ 107], 99.50th=[ 109], 99.90th=[ 117], 99.95th=[ 120], 00:23:26.017 | 99.99th=[ 128] 00:23:26.017 bw ( KiB/s): min= 737, max= 1523, per=4.22%, avg=1035.25, stdev=166.46, samples=20 00:23:26.017 iops : min= 184, max= 380, avg=258.60, stdev=41.50, samples=20 00:23:26.017 lat (msec) : 20=0.69%, 50=31.49%, 100=65.36%, 250=2.46% 00:23:26.017 cpu : usr=40.68%, sys=1.72%, ctx=1309, majf=0, minf=9 00:23:26.017 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=82.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:23:26.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.017 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.017 issued rwts: total=2604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.017 filename0: (groupid=0, jobs=1): err= 0: pid=83976: Mon Dec 9 04:13:06 2024 00:23:26.017 read: IOPS=266, BW=1067KiB/s (1093kB/s)(10.4MiB/10006msec) 00:23:26.017 slat (usec): min=3, max=8044, avg=30.50, stdev=268.73 00:23:26.017 clat (msec): min=9, max=109, avg=59.84, stdev=17.34 00:23:26.017 lat (msec): min=9, max=109, avg=59.87, stdev=17.35 00:23:26.017 clat percentiles (msec): 00:23:26.017 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 47], 00:23:26.017 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 62], 00:23:26.017 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 95], 00:23:26.017 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 110], 99.95th=[ 110], 00:23:26.017 | 99.99th=[ 110] 00:23:26.017 bw ( KiB/s): min= 792, max= 1176, per=4.29%, avg=1053.89, stdev=114.20, samples=19 00:23:26.017 iops : min= 198, max= 294, avg=263.47, stdev=28.55, samples=19 00:23:26.017 lat (msec) : 10=0.22%, 20=0.52%, 50=35.22%, 100=61.86%, 250=2.17% 00:23:26.017 cpu : usr=34.43%, sys=1.11%, ctx=921, majf=0, minf=9 00:23:26.017 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:23:26.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.017 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.017 issued rwts: total=2669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:23:26.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.017 filename0: (groupid=0, jobs=1): err= 0: pid=83977: Mon Dec 9 04:13:06 2024 00:23:26.017 read: IOPS=258, BW=1034KiB/s (1059kB/s)(10.2MiB/10056msec) 00:23:26.017 slat (usec): min=3, max=8031, avg=30.20, stdev=276.68 00:23:26.017 clat (msec): min=6, max=127, avg=61.70, stdev=18.77 00:23:26.018 lat (msec): min=6, max=127, avg=61.73, stdev=18.77 00:23:26.018 clat percentiles (msec): 00:23:26.018 | 1.00th=[ 12], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 47], 00:23:26.018 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 67], 00:23:26.018 | 70.00th=[ 71], 80.00th=[ 73], 90.00th=[ 85], 95.00th=[ 96], 00:23:26.018 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 117], 99.95th=[ 117], 00:23:26.018 | 99.99th=[ 128] 00:23:26.018 bw ( KiB/s): min= 736, max= 1785, per=4.22%, avg=1034.45, stdev=208.64, samples=20 00:23:26.018 iops : min= 184, max= 446, avg=258.60, stdev=52.11, samples=20 00:23:26.018 lat (msec) : 10=0.62%, 20=3.08%, 50=23.23%, 100=70.04%, 250=3.04% 00:23:26.018 cpu : usr=36.10%, sys=1.17%, ctx=1067, majf=0, minf=9 00:23:26.018 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:23:26.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.018 complete : 0=0.0%, 4=88.0%, 8=11.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.018 issued rwts: total=2600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.018 filename0: (groupid=0, jobs=1): err= 0: pid=83978: Mon Dec 9 04:13:06 2024 00:23:26.018 read: IOPS=261, BW=1047KiB/s (1072kB/s)(10.3MiB/10035msec) 00:23:26.018 slat (usec): min=5, max=8054, avg=33.00, stdev=285.37 00:23:26.018 clat (msec): min=23, max=112, avg=60.96, stdev=16.59 00:23:26.018 lat (msec): min=23, max=112, avg=60.99, stdev=16.59 00:23:26.018 clat percentiles (msec): 00:23:26.018 | 1.00th=[ 26], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 46], 00:23:26.018 | 30.00th=[ 48], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 65], 00:23:26.018 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 95], 00:23:26.018 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 112], 99.95th=[ 112], 00:23:26.018 | 99.99th=[ 112] 00:23:26.018 bw ( KiB/s): min= 744, max= 1192, per=4.26%, avg=1044.40, stdev=118.59, samples=20 00:23:26.018 iops : min= 186, max= 298, avg=261.10, stdev=29.65, samples=20 00:23:26.018 lat (msec) : 50=31.67%, 100=66.84%, 250=1.48% 00:23:26.018 cpu : usr=35.88%, sys=1.25%, ctx=1137, majf=0, minf=9 00:23:26.018 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.5%, 16=16.1%, 32=0.0%, >=64=0.0% 00:23:26.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.018 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.018 issued rwts: total=2627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.018 filename0: (groupid=0, jobs=1): err= 0: pid=83979: Mon Dec 9 04:13:06 2024 00:23:26.018 read: IOPS=252, BW=1011KiB/s (1035kB/s)(9.95MiB/10074msec) 00:23:26.018 slat (usec): min=4, max=11055, avg=31.94, stdev=332.94 00:23:26.018 clat (msec): min=3, max=132, avg=63.10, stdev=21.12 00:23:26.018 lat (msec): min=3, max=132, avg=63.14, stdev=21.12 00:23:26.018 clat percentiles (msec): 00:23:26.018 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 40], 20.00th=[ 48], 00:23:26.018 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 69], 00:23:26.018 | 70.00th=[ 71], 80.00th=[ 78], 90.00th=[ 92], 
95.00th=[ 97], 00:23:26.018 | 99.00th=[ 109], 99.50th=[ 113], 99.90th=[ 127], 99.95th=[ 130], 00:23:26.018 | 99.99th=[ 132] 00:23:26.018 bw ( KiB/s): min= 632, max= 2163, per=4.12%, avg=1011.35, stdev=304.76, samples=20 00:23:26.018 iops : min= 158, max= 540, avg=252.80, stdev=76.04, samples=20 00:23:26.018 lat (msec) : 4=0.63%, 10=1.96%, 20=3.61%, 50=18.42%, 100=72.31% 00:23:26.018 lat (msec) : 250=3.06% 00:23:26.018 cpu : usr=36.89%, sys=1.36%, ctx=1044, majf=0, minf=9 00:23:26.018 IO depths : 1=0.2%, 2=1.2%, 4=4.2%, 8=78.0%, 16=16.4%, 32=0.0%, >=64=0.0% 00:23:26.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.018 complete : 0=0.0%, 4=89.0%, 8=10.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.018 issued rwts: total=2546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.018 filename0: (groupid=0, jobs=1): err= 0: pid=83980: Mon Dec 9 04:13:06 2024 00:23:26.018 read: IOPS=263, BW=1053KiB/s (1079kB/s)(10.3MiB/10007msec) 00:23:26.018 slat (usec): min=6, max=8053, avg=38.35, stdev=357.55 00:23:26.018 clat (msec): min=9, max=118, avg=60.60, stdev=17.13 00:23:26.018 lat (msec): min=9, max=118, avg=60.63, stdev=17.12 00:23:26.018 clat percentiles (msec): 00:23:26.018 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 46], 00:23:26.018 | 30.00th=[ 48], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 65], 00:23:26.018 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 94], 00:23:26.018 | 99.00th=[ 107], 99.50th=[ 109], 99.90th=[ 113], 99.95th=[ 118], 00:23:26.018 | 99.99th=[ 120] 00:23:26.018 bw ( KiB/s): min= 792, max= 1176, per=4.23%, avg=1037.89, stdev=118.83, samples=19 00:23:26.018 iops : min= 198, max= 294, avg=259.47, stdev=29.71, samples=19 00:23:26.018 lat (msec) : 10=0.23%, 20=0.15%, 50=33.40%, 100=63.49%, 250=2.73% 00:23:26.018 cpu : usr=33.23%, sys=1.18%, ctx=994, majf=0, minf=9 00:23:26.018 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:23:26.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.018 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.018 issued rwts: total=2635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.018 filename0: (groupid=0, jobs=1): err= 0: pid=83981: Mon Dec 9 04:13:06 2024 00:23:26.018 read: IOPS=264, BW=1058KiB/s (1083kB/s)(10.3MiB/10004msec) 00:23:26.018 slat (usec): min=6, max=8032, avg=39.85, stdev=330.23 00:23:26.018 clat (msec): min=7, max=119, avg=60.34, stdev=17.49 00:23:26.018 lat (msec): min=7, max=119, avg=60.38, stdev=17.50 00:23:26.018 clat percentiles (msec): 00:23:26.018 | 1.00th=[ 28], 5.00th=[ 38], 10.00th=[ 41], 20.00th=[ 45], 00:23:26.018 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 64], 00:23:26.018 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 96], 00:23:26.018 | 99.00th=[ 107], 99.50th=[ 116], 99.90th=[ 116], 99.95th=[ 120], 00:23:26.018 | 99.99th=[ 120] 00:23:26.018 bw ( KiB/s): min= 696, max= 1176, per=4.24%, avg=1040.89, stdev=127.41, samples=19 00:23:26.018 iops : min= 174, max= 294, avg=260.21, stdev=31.87, samples=19 00:23:26.018 lat (msec) : 10=0.23%, 20=0.15%, 50=33.99%, 100=62.68%, 250=2.95% 00:23:26.018 cpu : usr=41.24%, sys=1.50%, ctx=1141, majf=0, minf=9 00:23:26.018 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=81.2%, 16=15.4%, 32=0.0%, >=64=0.0% 00:23:26.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:23:26.018 complete : 0=0.0%, 4=87.4%, 8=12.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.018 issued rwts: total=2645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.018 filename0: (groupid=0, jobs=1): err= 0: pid=83982: Mon Dec 9 04:13:06 2024 00:23:26.018 read: IOPS=265, BW=1064KiB/s (1089kB/s)(10.4MiB/10002msec) 00:23:26.018 slat (usec): min=4, max=8040, avg=40.02, stdev=349.18 00:23:26.018 clat (msec): min=3, max=108, avg=59.98, stdev=17.15 00:23:26.018 lat (msec): min=3, max=108, avg=60.02, stdev=17.15 00:23:26.018 clat percentiles (msec): 00:23:26.018 | 1.00th=[ 23], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 45], 00:23:26.018 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 64], 00:23:26.018 | 70.00th=[ 68], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 94], 00:23:26.018 | 99.00th=[ 105], 99.50th=[ 108], 99.90th=[ 109], 99.95th=[ 109], 00:23:26.018 | 99.99th=[ 109] 00:23:26.018 bw ( KiB/s): min= 816, max= 1224, per=4.25%, avg=1043.37, stdev=126.98, samples=19 00:23:26.018 iops : min= 204, max= 306, avg=260.84, stdev=31.74, samples=19 00:23:26.018 lat (msec) : 4=0.23%, 10=0.26%, 20=0.45%, 50=34.40%, 100=62.44% 00:23:26.018 lat (msec) : 250=2.22% 00:23:26.018 cpu : usr=44.47%, sys=1.56%, ctx=1378, majf=0, minf=9 00:23:26.018 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.7%, 16=15.5%, 32=0.0%, >=64=0.0% 00:23:26.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.018 complete : 0=0.0%, 4=87.3%, 8=12.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.018 issued rwts: total=2660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.018 filename1: (groupid=0, jobs=1): err= 0: pid=83983: Mon Dec 9 04:13:06 2024 00:23:26.018 read: IOPS=258, BW=1032KiB/s (1057kB/s)(10.1MiB/10023msec) 00:23:26.018 slat (usec): min=5, max=8032, avg=29.89, stdev=219.35 00:23:26.018 clat (msec): min=17, max=128, avg=61.86, stdev=16.88 00:23:26.018 lat (msec): min=17, max=128, avg=61.89, stdev=16.88 00:23:26.018 clat percentiles (msec): 00:23:26.018 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 46], 00:23:26.018 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 66], 00:23:26.018 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 96], 00:23:26.018 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 129], 00:23:26.018 | 99.99th=[ 129] 00:23:26.018 bw ( KiB/s): min= 776, max= 1184, per=4.16%, avg=1020.68, stdev=123.61, samples=19 00:23:26.018 iops : min= 194, max= 296, avg=255.16, stdev=30.91, samples=19 00:23:26.018 lat (msec) : 20=0.23%, 50=29.04%, 100=67.25%, 250=3.48% 00:23:26.018 cpu : usr=41.50%, sys=1.66%, ctx=1219, majf=0, minf=9 00:23:26.018 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.9%, 16=15.7%, 32=0.0%, >=64=0.0% 00:23:26.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.018 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.018 issued rwts: total=2586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.018 filename1: (groupid=0, jobs=1): err= 0: pid=83984: Mon Dec 9 04:13:06 2024 00:23:26.018 read: IOPS=252, BW=1010KiB/s (1034kB/s)(9.89MiB/10035msec) 00:23:26.018 slat (usec): min=4, max=8039, avg=32.90, stdev=318.43 00:23:26.018 clat (msec): min=23, max=133, avg=63.21, stdev=16.80 00:23:26.018 lat (msec): min=23, max=133, avg=63.25, stdev=16.81 00:23:26.018 clat percentiles (msec): 
00:23:26.018 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:23:26.018 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 69], 00:23:26.018 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 96], 00:23:26.018 | 99.00th=[ 107], 99.50th=[ 109], 99.90th=[ 123], 99.95th=[ 130], 00:23:26.018 | 99.99th=[ 134] 00:23:26.018 bw ( KiB/s): min= 736, max= 1272, per=4.11%, avg=1008.80, stdev=121.49, samples=20 00:23:26.018 iops : min= 184, max= 318, avg=252.15, stdev=30.38, samples=20 00:23:26.018 lat (msec) : 50=26.61%, 100=70.94%, 250=2.45% 00:23:26.019 cpu : usr=31.97%, sys=1.20%, ctx=858, majf=0, minf=9 00:23:26.019 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=81.9%, 16=16.7%, 32=0.0%, >=64=0.0% 00:23:26.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.019 complete : 0=0.0%, 4=87.9%, 8=11.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.019 issued rwts: total=2533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.019 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.019 filename1: (groupid=0, jobs=1): err= 0: pid=83985: Mon Dec 9 04:13:06 2024 00:23:26.019 read: IOPS=252, BW=1009KiB/s (1033kB/s)(9.88MiB/10020msec) 00:23:26.019 slat (usec): min=5, max=5020, avg=24.19, stdev=141.41 00:23:26.019 clat (msec): min=21, max=143, avg=63.29, stdev=17.79 00:23:26.019 lat (msec): min=21, max=143, avg=63.32, stdev=17.80 00:23:26.019 clat percentiles (msec): 00:23:26.019 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 43], 20.00th=[ 47], 00:23:26.019 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 68], 00:23:26.019 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 86], 95.00th=[ 99], 00:23:26.019 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 128], 99.95th=[ 144], 00:23:26.019 | 99.99th=[ 144] 00:23:26.019 bw ( KiB/s): min= 640, max= 1120, per=4.06%, avg=995.79, stdev=142.52, samples=19 00:23:26.019 iops : min= 160, max= 280, avg=248.95, stdev=35.63, samples=19 00:23:26.019 lat (msec) : 50=27.89%, 100=68.28%, 250=3.84% 00:23:26.019 cpu : usr=40.02%, sys=1.51%, ctx=1293, majf=0, minf=9 00:23:26.019 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=78.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:23:26.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.019 complete : 0=0.0%, 4=88.5%, 8=10.5%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.019 issued rwts: total=2528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.019 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.019 filename1: (groupid=0, jobs=1): err= 0: pid=83986: Mon Dec 9 04:13:06 2024 00:23:26.019 read: IOPS=250, BW=1004KiB/s (1028kB/s)(9.86MiB/10053msec) 00:23:26.019 slat (usec): min=5, max=8023, avg=28.36, stdev=222.65 00:23:26.019 clat (msec): min=9, max=132, avg=63.56, stdev=19.11 00:23:26.019 lat (msec): min=9, max=132, avg=63.59, stdev=19.11 00:23:26.019 clat percentiles (msec): 00:23:26.019 | 1.00th=[ 14], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 47], 00:23:26.019 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 68], 00:23:26.019 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 88], 95.00th=[ 96], 00:23:26.019 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 133], 99.95th=[ 133], 00:23:26.019 | 99.99th=[ 133] 00:23:26.019 bw ( KiB/s): min= 680, max= 1405, per=4.09%, avg=1003.85, stdev=163.35, samples=20 00:23:26.019 iops : min= 170, max= 351, avg=250.95, stdev=40.80, samples=20 00:23:26.019 lat (msec) : 10=0.08%, 20=1.82%, 50=23.78%, 100=70.59%, 250=3.73% 00:23:26.019 cpu : usr=41.00%, sys=1.65%, ctx=1232, majf=0, minf=9 00:23:26.019 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 
8=79.3%, 16=16.2%, 32=0.0%, >=64=0.0% 00:23:26.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.019 complete : 0=0.0%, 4=88.5%, 8=10.7%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.019 issued rwts: total=2523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.019 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.019 filename1: (groupid=0, jobs=1): err= 0: pid=83987: Mon Dec 9 04:13:06 2024 00:23:26.019 read: IOPS=259, BW=1039KiB/s (1064kB/s)(10.2MiB/10076msec) 00:23:26.019 slat (usec): min=3, max=10028, avg=28.96, stdev=289.82 00:23:26.019 clat (usec): min=1271, max=135900, avg=61313.91, stdev=25132.07 00:23:26.019 lat (usec): min=1278, max=135920, avg=61342.87, stdev=25139.28 00:23:26.019 clat percentiles (usec): 00:23:26.019 | 1.00th=[ 1385], 5.00th=[ 2802], 10.00th=[ 17957], 20.00th=[ 46924], 00:23:26.019 | 30.00th=[ 55837], 40.00th=[ 61604], 50.00th=[ 64226], 60.00th=[ 67634], 00:23:26.019 | 70.00th=[ 71828], 80.00th=[ 80217], 90.00th=[ 88605], 95.00th=[ 99091], 00:23:26.019 | 99.00th=[112722], 99.50th=[127402], 99.90th=[135267], 99.95th=[135267], 00:23:26.019 | 99.99th=[135267] 00:23:26.019 bw ( KiB/s): min= 656, max= 3174, per=4.24%, avg=1041.50, stdev=520.52, samples=20 00:23:26.019 iops : min= 164, max= 793, avg=260.35, stdev=130.02, samples=20 00:23:26.019 lat (msec) : 2=2.45%, 4=3.59%, 10=0.76%, 20=4.05%, 50=12.80% 00:23:26.019 lat (msec) : 100=71.76%, 250=4.59% 00:23:26.019 cpu : usr=43.71%, sys=1.67%, ctx=1615, majf=0, minf=0 00:23:26.019 IO depths : 1=0.3%, 2=2.3%, 4=8.2%, 8=73.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:23:26.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.019 complete : 0=0.0%, 4=90.0%, 8=8.2%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.019 issued rwts: total=2617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.019 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.019 filename1: (groupid=0, jobs=1): err= 0: pid=83988: Mon Dec 9 04:13:06 2024 00:23:26.019 read: IOPS=256, BW=1026KiB/s (1051kB/s)(10.1MiB/10060msec) 00:23:26.019 slat (usec): min=6, max=8057, avg=29.12, stdev=262.35 00:23:26.019 clat (msec): min=11, max=131, avg=62.11, stdev=18.03 00:23:26.019 lat (msec): min=11, max=131, avg=62.14, stdev=18.04 00:23:26.019 clat percentiles (msec): 00:23:26.019 | 1.00th=[ 14], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 47], 00:23:26.019 | 30.00th=[ 52], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 66], 00:23:26.019 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 87], 95.00th=[ 95], 00:23:26.019 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 129], 00:23:26.019 | 99.99th=[ 132] 00:23:26.019 bw ( KiB/s): min= 720, max= 1428, per=4.19%, avg=1028.20, stdev=148.52, samples=20 00:23:26.019 iops : min= 180, max= 357, avg=257.05, stdev=37.13, samples=20 00:23:26.019 lat (msec) : 20=2.01%, 50=26.19%, 100=69.31%, 250=2.48% 00:23:26.019 cpu : usr=42.04%, sys=1.82%, ctx=1349, majf=0, minf=9 00:23:26.019 IO depths : 1=0.1%, 2=0.6%, 4=2.0%, 8=81.0%, 16=16.4%, 32=0.0%, >=64=0.0% 00:23:26.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.019 complete : 0=0.0%, 4=88.0%, 8=11.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.019 issued rwts: total=2581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.019 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.019 filename1: (groupid=0, jobs=1): err= 0: pid=83989: Mon Dec 9 04:13:06 2024 00:23:26.019 read: IOPS=240, BW=961KiB/s (984kB/s)(9636KiB/10023msec) 00:23:26.019 slat (usec): min=4, 
max=8049, avg=32.52, stdev=326.32 00:23:26.019 clat (msec): min=23, max=130, avg=66.32, stdev=16.78 00:23:26.019 lat (msec): min=23, max=130, avg=66.36, stdev=16.79 00:23:26.019 clat percentiles (msec): 00:23:26.019 | 1.00th=[ 35], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 50], 00:23:26.019 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70], 00:23:26.019 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 91], 95.00th=[ 97], 00:23:26.019 | 99.00th=[ 109], 99.50th=[ 120], 99.90th=[ 130], 99.95th=[ 131], 00:23:26.019 | 99.99th=[ 131] 00:23:26.019 bw ( KiB/s): min= 640, max= 1104, per=3.91%, avg=959.85, stdev=128.50, samples=20 00:23:26.019 iops : min= 160, max= 276, avg=239.95, stdev=32.12, samples=20 00:23:26.019 lat (msec) : 50=20.42%, 100=75.30%, 250=4.28% 00:23:26.019 cpu : usr=34.16%, sys=1.26%, ctx=916, majf=0, minf=9 00:23:26.019 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=76.8%, 16=16.2%, 32=0.0%, >=64=0.0% 00:23:26.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.019 complete : 0=0.0%, 4=89.3%, 8=9.5%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.019 issued rwts: total=2409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.019 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.019 filename1: (groupid=0, jobs=1): err= 0: pid=83990: Mon Dec 9 04:13:06 2024 00:23:26.019 read: IOPS=257, BW=1030KiB/s (1055kB/s)(10.1MiB/10031msec) 00:23:26.019 slat (usec): min=5, max=8072, avg=47.94, stdev=466.75 00:23:26.019 clat (msec): min=11, max=120, avg=61.90, stdev=16.95 00:23:26.019 lat (msec): min=11, max=120, avg=61.94, stdev=16.95 00:23:26.019 clat percentiles (msec): 00:23:26.019 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 00:23:26.019 | 30.00th=[ 50], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 66], 00:23:26.019 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 96], 00:23:26.019 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 121], 00:23:26.019 | 99.99th=[ 121] 00:23:26.019 bw ( KiB/s): min= 768, max= 1152, per=4.18%, avg=1026.80, stdev=123.48, samples=20 00:23:26.019 iops : min= 192, max= 288, avg=256.70, stdev=30.87, samples=20 00:23:26.019 lat (msec) : 20=0.08%, 50=30.08%, 100=66.28%, 250=3.56% 00:23:26.019 cpu : usr=32.00%, sys=1.24%, ctx=864, majf=0, minf=9 00:23:26.019 IO depths : 1=0.1%, 2=0.8%, 4=2.9%, 8=80.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:26.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.019 complete : 0=0.0%, 4=87.9%, 8=11.4%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.019 issued rwts: total=2583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.019 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.019 filename2: (groupid=0, jobs=1): err= 0: pid=83991: Mon Dec 9 04:13:06 2024 00:23:26.019 read: IOPS=259, BW=1038KiB/s (1063kB/s)(10.2MiB/10022msec) 00:23:26.019 slat (usec): min=5, max=8046, avg=45.44, stdev=383.11 00:23:26.019 clat (msec): min=24, max=131, avg=61.40, stdev=16.56 00:23:26.019 lat (msec): min=24, max=131, avg=61.45, stdev=16.56 00:23:26.019 clat percentiles (msec): 00:23:26.019 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 46], 00:23:26.019 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 65], 00:23:26.019 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 95], 00:23:26.019 | 99.00th=[ 106], 99.50th=[ 107], 99.90th=[ 115], 99.95th=[ 121], 00:23:26.019 | 99.99th=[ 132] 00:23:26.019 bw ( KiB/s): min= 800, max= 1152, per=4.22%, avg=1036.80, stdev=111.26, samples=20 00:23:26.019 iops : min= 200, max= 288, avg=259.20, 
stdev=27.81, samples=20 00:23:26.019 lat (msec) : 50=30.33%, 100=67.24%, 250=2.42% 00:23:26.019 cpu : usr=42.50%, sys=1.62%, ctx=1361, majf=0, minf=9 00:23:26.019 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:23:26.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.019 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.019 issued rwts: total=2601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.019 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.019 filename2: (groupid=0, jobs=1): err= 0: pid=83992: Mon Dec 9 04:13:06 2024 00:23:26.019 read: IOPS=251, BW=1007KiB/s (1031kB/s)(9.89MiB/10056msec) 00:23:26.020 slat (usec): min=6, max=8031, avg=29.06, stdev=251.53 00:23:26.020 clat (msec): min=7, max=119, avg=63.30, stdev=18.30 00:23:26.020 lat (msec): min=7, max=119, avg=63.33, stdev=18.31 00:23:26.020 clat percentiles (msec): 00:23:26.020 | 1.00th=[ 16], 5.00th=[ 35], 10.00th=[ 43], 20.00th=[ 48], 00:23:26.020 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 69], 00:23:26.020 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 87], 95.00th=[ 95], 00:23:26.020 | 99.00th=[ 106], 99.50th=[ 106], 99.90th=[ 116], 99.95th=[ 118], 00:23:26.020 | 99.99th=[ 121] 00:23:26.020 bw ( KiB/s): min= 736, max= 1714, per=4.11%, avg=1008.50, stdev=196.57, samples=20 00:23:26.020 iops : min= 184, max= 428, avg=252.10, stdev=49.05, samples=20 00:23:26.020 lat (msec) : 10=0.55%, 20=2.61%, 50=20.38%, 100=74.41%, 250=2.05% 00:23:26.020 cpu : usr=32.60%, sys=1.13%, ctx=993, majf=0, minf=9 00:23:26.020 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.2%, 16=17.0%, 32=0.0%, >=64=0.0% 00:23:26.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.020 complete : 0=0.0%, 4=88.3%, 8=11.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.020 issued rwts: total=2532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.020 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.020 filename2: (groupid=0, jobs=1): err= 0: pid=83993: Mon Dec 9 04:13:06 2024 00:23:26.020 read: IOPS=258, BW=1035KiB/s (1060kB/s)(10.1MiB/10041msec) 00:23:26.020 slat (usec): min=6, max=9034, avg=35.92, stdev=321.59 00:23:26.020 clat (msec): min=23, max=125, avg=61.62, stdev=17.12 00:23:26.020 lat (msec): min=23, max=125, avg=61.65, stdev=17.12 00:23:26.020 clat percentiles (msec): 00:23:26.020 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 47], 00:23:26.020 | 30.00th=[ 50], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 65], 00:23:26.020 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 96], 00:23:26.020 | 99.00th=[ 106], 99.50th=[ 107], 99.90th=[ 120], 99.95th=[ 123], 00:23:26.020 | 99.99th=[ 126] 00:23:26.020 bw ( KiB/s): min= 792, max= 1368, per=4.22%, avg=1034.80, stdev=136.27, samples=20 00:23:26.020 iops : min= 198, max= 342, avg=258.65, stdev=34.08, samples=20 00:23:26.020 lat (msec) : 50=30.56%, 100=66.86%, 250=2.58% 00:23:26.020 cpu : usr=32.93%, sys=1.37%, ctx=991, majf=0, minf=9 00:23:26.020 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.1%, 16=16.3%, 32=0.0%, >=64=0.0% 00:23:26.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.020 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.020 issued rwts: total=2598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.020 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.020 filename2: (groupid=0, jobs=1): err= 0: pid=83994: Mon Dec 9 04:13:06 2024 00:23:26.020 read: IOPS=248, 
BW=994KiB/s (1018kB/s)(9964KiB/10024msec) 00:23:26.020 slat (usec): min=3, max=9048, avg=36.35, stdev=296.63 00:23:26.020 clat (msec): min=23, max=137, avg=64.14, stdev=18.96 00:23:26.020 lat (msec): min=23, max=137, avg=64.18, stdev=18.96 00:23:26.020 clat percentiles (msec): 00:23:26.020 | 1.00th=[ 27], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 48], 00:23:26.020 | 30.00th=[ 53], 40.00th=[ 60], 50.00th=[ 63], 60.00th=[ 68], 00:23:26.020 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 91], 95.00th=[ 99], 00:23:26.020 | 99.00th=[ 120], 99.50th=[ 127], 99.90th=[ 130], 99.95th=[ 138], 00:23:26.020 | 99.99th=[ 138] 00:23:26.020 bw ( KiB/s): min= 640, max= 1152, per=4.00%, avg=980.21, stdev=165.06, samples=19 00:23:26.020 iops : min= 160, max= 288, avg=245.05, stdev=41.27, samples=19 00:23:26.020 lat (msec) : 50=28.10%, 100=67.24%, 250=4.66% 00:23:26.020 cpu : usr=39.25%, sys=1.55%, ctx=1251, majf=0, minf=9 00:23:26.020 IO depths : 1=0.1%, 2=2.2%, 4=8.6%, 8=74.3%, 16=14.8%, 32=0.0%, >=64=0.0% 00:23:26.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.020 complete : 0=0.0%, 4=89.5%, 8=8.6%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.020 issued rwts: total=2491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.020 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.020 filename2: (groupid=0, jobs=1): err= 0: pid=83995: Mon Dec 9 04:13:06 2024 00:23:26.020 read: IOPS=251, BW=1006KiB/s (1030kB/s)(9.87MiB/10049msec) 00:23:26.020 slat (usec): min=6, max=8031, avg=33.86, stdev=318.03 00:23:26.020 clat (msec): min=14, max=134, avg=63.38, stdev=17.54 00:23:26.020 lat (msec): min=14, max=134, avg=63.42, stdev=17.54 00:23:26.020 clat percentiles (msec): 00:23:26.020 | 1.00th=[ 20], 5.00th=[ 38], 10.00th=[ 43], 20.00th=[ 48], 00:23:26.020 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 68], 00:23:26.020 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 86], 95.00th=[ 97], 00:23:26.020 | 99.00th=[ 106], 99.50th=[ 121], 99.90th=[ 128], 99.95th=[ 134], 00:23:26.020 | 99.99th=[ 134] 00:23:26.020 bw ( KiB/s): min= 648, max= 1389, per=4.09%, avg=1004.85, stdev=152.37, samples=20 00:23:26.020 iops : min= 162, max= 347, avg=251.05, stdev=38.05, samples=20 00:23:26.020 lat (msec) : 20=1.19%, 50=23.78%, 100=72.02%, 250=3.01% 00:23:26.020 cpu : usr=37.58%, sys=1.40%, ctx=1126, majf=0, minf=9 00:23:26.020 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=79.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:23:26.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.020 complete : 0=0.0%, 4=88.4%, 8=10.8%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.020 issued rwts: total=2527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.020 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.020 filename2: (groupid=0, jobs=1): err= 0: pid=83996: Mon Dec 9 04:13:06 2024 00:23:26.020 read: IOPS=259, BW=1039KiB/s (1064kB/s)(10.2MiB/10017msec) 00:23:26.020 slat (usec): min=5, max=4045, avg=25.89, stdev=143.87 00:23:26.020 clat (msec): min=17, max=125, avg=61.48, stdev=17.05 00:23:26.020 lat (msec): min=17, max=125, avg=61.50, stdev=17.05 00:23:26.020 clat percentiles (msec): 00:23:26.020 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 46], 00:23:26.020 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 66], 00:23:26.020 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 96], 00:23:26.020 | 99.00th=[ 110], 99.50th=[ 118], 99.90th=[ 120], 99.95th=[ 126], 00:23:26.020 | 99.99th=[ 127] 00:23:26.020 bw ( KiB/s): min= 784, max= 1176, per=4.19%, avg=1027.89, 
stdev=123.80, samples=19 00:23:26.020 iops : min= 196, max= 294, avg=256.95, stdev=30.99, samples=19 00:23:26.020 lat (msec) : 20=0.12%, 50=30.60%, 100=66.40%, 250=2.88% 00:23:26.020 cpu : usr=42.27%, sys=1.72%, ctx=1251, majf=0, minf=9 00:23:26.020 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=81.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:23:26.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.020 complete : 0=0.0%, 4=87.6%, 8=11.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.020 issued rwts: total=2601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.020 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.020 filename2: (groupid=0, jobs=1): err= 0: pid=83997: Mon Dec 9 04:13:06 2024 00:23:26.020 read: IOPS=254, BW=1017KiB/s (1042kB/s)(9.99MiB/10057msec) 00:23:26.020 slat (usec): min=5, max=8029, avg=24.38, stdev=223.90 00:23:26.020 clat (msec): min=8, max=119, avg=62.70, stdev=18.80 00:23:26.020 lat (msec): min=8, max=119, avg=62.72, stdev=18.80 00:23:26.020 clat percentiles (msec): 00:23:26.020 | 1.00th=[ 9], 5.00th=[ 29], 10.00th=[ 40], 20.00th=[ 48], 00:23:26.020 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 69], 00:23:26.020 | 70.00th=[ 71], 80.00th=[ 73], 90.00th=[ 85], 95.00th=[ 96], 00:23:26.020 | 99.00th=[ 108], 99.50th=[ 110], 99.90th=[ 114], 99.95th=[ 120], 00:23:26.020 | 99.99th=[ 120] 00:23:26.020 bw ( KiB/s): min= 728, max= 1777, per=4.15%, avg=1018.45, stdev=205.83, samples=20 00:23:26.020 iops : min= 182, max= 444, avg=254.60, stdev=51.41, samples=20 00:23:26.020 lat (msec) : 10=1.09%, 20=2.66%, 50=19.70%, 100=74.16%, 250=2.38% 00:23:26.020 cpu : usr=34.98%, sys=1.28%, ctx=974, majf=0, minf=9 00:23:26.020 IO depths : 1=0.1%, 2=0.7%, 4=2.5%, 8=80.1%, 16=16.7%, 32=0.0%, >=64=0.0% 00:23:26.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.020 complete : 0=0.0%, 4=88.5%, 8=11.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.020 issued rwts: total=2558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.020 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.020 filename2: (groupid=0, jobs=1): err= 0: pid=83998: Mon Dec 9 04:13:06 2024 00:23:26.020 read: IOPS=251, BW=1007KiB/s (1031kB/s)(9.89MiB/10053msec) 00:23:26.020 slat (usec): min=6, max=8033, avg=39.34, stdev=389.46 00:23:26.020 clat (msec): min=12, max=119, avg=63.37, stdev=17.15 00:23:26.020 lat (msec): min=12, max=119, avg=63.41, stdev=17.16 00:23:26.020 clat percentiles (msec): 00:23:26.020 | 1.00th=[ 23], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 48], 00:23:26.020 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 69], 00:23:26.020 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 96], 00:23:26.020 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 117], 99.95th=[ 118], 00:23:26.020 | 99.99th=[ 121] 00:23:26.020 bw ( KiB/s): min= 736, max= 1472, per=4.11%, avg=1007.15, stdev=148.58, samples=20 00:23:26.020 iops : min= 184, max= 368, avg=251.70, stdev=37.12, samples=20 00:23:26.020 lat (msec) : 20=0.63%, 50=23.63%, 100=73.45%, 250=2.29% 00:23:26.020 cpu : usr=32.14%, sys=1.09%, ctx=862, majf=0, minf=9 00:23:26.020 IO depths : 1=0.1%, 2=0.4%, 4=1.2%, 8=81.5%, 16=16.8%, 32=0.0%, >=64=0.0% 00:23:26.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.020 complete : 0=0.0%, 4=88.1%, 8=11.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.020 issued rwts: total=2531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.020 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:26.020 00:23:26.020 
Run status group 0 (all jobs): 00:23:26.020 READ: bw=24.0MiB/s (25.1MB/s), 961KiB/s-1067KiB/s (984kB/s-1093kB/s), io=241MiB (253MB), run=10002-10076msec 00:23:26.020 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:26.020 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:26.020 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:26.020 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:26.020 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:26.020 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:26.020 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.020 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.020 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.021 04:13:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.021 bdev_null0 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.021 [2024-12-09 04:13:06.581405] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.021 bdev_null1 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.021 { 00:23:26.021 "params": { 00:23:26.021 "name": "Nvme$subsystem", 00:23:26.021 "trtype": "$TEST_TRANSPORT", 00:23:26.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.021 "adrfam": "ipv4", 00:23:26.021 "trsvcid": "$NVMF_PORT", 00:23:26.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.021 "hdgst": ${hdgst:-false}, 00:23:26.021 "ddgst": ${ddgst:-false} 00:23:26.021 }, 00:23:26.021 "method": "bdev_nvme_attach_controller" 00:23:26.021 } 00:23:26.021 EOF 00:23:26.021 )") 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:26.021 04:13:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.021 { 00:23:26.021 "params": { 00:23:26.021 "name": "Nvme$subsystem", 00:23:26.021 "trtype": "$TEST_TRANSPORT", 00:23:26.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.021 "adrfam": "ipv4", 00:23:26.021 "trsvcid": "$NVMF_PORT", 00:23:26.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.021 "hdgst": ${hdgst:-false}, 00:23:26.021 "ddgst": ${ddgst:-false} 00:23:26.021 }, 00:23:26.021 "method": "bdev_nvme_attach_controller" 00:23:26.021 } 00:23:26.021 EOF 00:23:26.021 )") 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:26.021 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:26.022 04:13:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:26.022 04:13:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:23:26.022 04:13:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:26.022 04:13:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:26.022 "params": { 00:23:26.022 "name": "Nvme0", 00:23:26.022 "trtype": "tcp", 00:23:26.022 "traddr": "10.0.0.3", 00:23:26.022 "adrfam": "ipv4", 00:23:26.022 "trsvcid": "4420", 00:23:26.022 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:26.022 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:26.022 "hdgst": false, 00:23:26.022 "ddgst": false 00:23:26.022 }, 00:23:26.022 "method": "bdev_nvme_attach_controller" 00:23:26.022 },{ 00:23:26.022 "params": { 00:23:26.022 "name": "Nvme1", 00:23:26.022 "trtype": "tcp", 00:23:26.022 "traddr": "10.0.0.3", 00:23:26.022 "adrfam": "ipv4", 00:23:26.022 "trsvcid": "4420", 00:23:26.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.022 "hdgst": false, 00:23:26.022 "ddgst": false 00:23:26.022 }, 00:23:26.022 "method": "bdev_nvme_attach_controller" 00:23:26.022 }' 00:23:26.022 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:26.022 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:26.022 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:26.022 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:26.022 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:26.022 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:26.022 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:26.022 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:26.022 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:26.022 04:13:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:26.022 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:26.022 ... 00:23:26.022 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:26.022 ... 
00:23:26.022 fio-3.35 00:23:26.022 Starting 4 threads 00:23:31.289 00:23:31.289 filename0: (groupid=0, jobs=1): err= 0: pid=84140: Mon Dec 9 04:13:12 2024 00:23:31.289 read: IOPS=2364, BW=18.5MiB/s (19.4MB/s)(92.4MiB/5002msec) 00:23:31.289 slat (usec): min=3, max=110, avg=16.06, stdev= 7.68 00:23:31.289 clat (usec): min=717, max=22050, avg=3341.46, stdev=1515.32 00:23:31.289 lat (usec): min=727, max=22110, avg=3357.53, stdev=1515.43 00:23:31.289 clat percentiles (usec): 00:23:31.289 | 1.00th=[ 1029], 5.00th=[ 1893], 10.00th=[ 2057], 20.00th=[ 2212], 00:23:31.289 | 30.00th=[ 2343], 40.00th=[ 2802], 50.00th=[ 3261], 60.00th=[ 3818], 00:23:31.289 | 70.00th=[ 3949], 80.00th=[ 4080], 90.00th=[ 4359], 95.00th=[ 5145], 00:23:31.289 | 99.00th=[ 7046], 99.50th=[13042], 99.90th=[20055], 99.95th=[20579], 00:23:31.289 | 99.99th=[21890] 00:23:31.289 bw ( KiB/s): min=10240, max=21264, per=25.49%, avg=18639.89, stdev=3639.20, samples=9 00:23:31.289 iops : min= 1280, max= 2658, avg=2329.89, stdev=454.99, samples=9 00:23:31.289 lat (usec) : 750=0.03%, 1000=0.77% 00:23:31.289 lat (msec) : 2=6.44%, 4=66.83%, 10=25.08%, 20=0.75%, 50=0.09% 00:23:31.289 cpu : usr=92.98%, sys=6.06%, ctx=17, majf=0, minf=9 00:23:31.289 IO depths : 1=0.1%, 2=4.3%, 4=62.3%, 8=33.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:31.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.289 complete : 0=0.0%, 4=98.3%, 8=1.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.289 issued rwts: total=11825,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.289 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:31.289 filename0: (groupid=0, jobs=1): err= 0: pid=84141: Mon Dec 9 04:13:12 2024 00:23:31.289 read: IOPS=2335, BW=18.2MiB/s (19.1MB/s)(91.3MiB/5002msec) 00:23:31.289 slat (nsec): min=3918, max=93834, avg=15007.38, stdev=8217.48 00:23:31.289 clat (usec): min=844, max=22217, avg=3382.57, stdev=1464.64 00:23:31.289 lat (usec): min=852, max=22224, avg=3397.58, stdev=1465.06 00:23:31.289 clat percentiles (usec): 00:23:31.289 | 1.00th=[ 1778], 5.00th=[ 2024], 10.00th=[ 2114], 20.00th=[ 2245], 00:23:31.289 | 30.00th=[ 2376], 40.00th=[ 2835], 50.00th=[ 3458], 60.00th=[ 3851], 00:23:31.289 | 70.00th=[ 3982], 80.00th=[ 4080], 90.00th=[ 4293], 95.00th=[ 4817], 00:23:31.289 | 99.00th=[ 7046], 99.50th=[13042], 99.90th=[20055], 99.95th=[20579], 00:23:31.289 | 99.99th=[21890] 00:23:31.289 bw ( KiB/s): min=10240, max=20944, per=25.15%, avg=18394.67, stdev=3245.56, samples=9 00:23:31.289 iops : min= 1280, max= 2618, avg=2299.33, stdev=405.69, samples=9 00:23:31.289 lat (usec) : 1000=0.02% 00:23:31.289 lat (msec) : 2=4.13%, 4=68.72%, 10=26.28%, 20=0.76%, 50=0.09% 00:23:31.289 cpu : usr=93.04%, sys=6.06%, ctx=4, majf=0, minf=10 00:23:31.289 IO depths : 1=0.3%, 2=5.1%, 4=62.2%, 8=32.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:31.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.289 complete : 0=0.0%, 4=98.1%, 8=1.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.289 issued rwts: total=11683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.289 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:31.289 filename1: (groupid=0, jobs=1): err= 0: pid=84142: Mon Dec 9 04:13:12 2024 00:23:31.289 read: IOPS=2325, BW=18.2MiB/s (19.1MB/s)(90.9MiB/5001msec) 00:23:31.289 slat (usec): min=5, max=110, avg=16.77, stdev= 7.94 00:23:31.289 clat (usec): min=730, max=22051, avg=3393.49, stdev=1459.52 00:23:31.289 lat (usec): min=737, max=22114, avg=3410.26, stdev=1459.70 00:23:31.289 clat percentiles (usec): 
00:23:31.289 | 1.00th=[ 1778], 5.00th=[ 2040], 10.00th=[ 2147], 20.00th=[ 2245], 00:23:31.289 | 30.00th=[ 2409], 40.00th=[ 2835], 50.00th=[ 3523], 60.00th=[ 3851], 00:23:31.289 | 70.00th=[ 3949], 80.00th=[ 4047], 90.00th=[ 4293], 95.00th=[ 4817], 00:23:31.289 | 99.00th=[ 7046], 99.50th=[13042], 99.90th=[20055], 99.95th=[20579], 00:23:31.289 | 99.99th=[21890] 00:23:31.289 bw ( KiB/s): min=10240, max=20944, per=25.01%, avg=18289.78, stdev=3221.74, samples=9 00:23:31.289 iops : min= 1280, max= 2618, avg=2286.22, stdev=402.72, samples=9 00:23:31.289 lat (usec) : 750=0.03%, 1000=0.05% 00:23:31.289 lat (msec) : 2=3.68%, 4=70.84%, 10=24.54%, 20=0.77%, 50=0.09% 00:23:31.289 cpu : usr=93.78%, sys=5.26%, ctx=52, majf=0, minf=0 00:23:31.289 IO depths : 1=0.3%, 2=5.5%, 4=62.0%, 8=32.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:31.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.289 complete : 0=0.0%, 4=97.9%, 8=2.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.289 issued rwts: total=11630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.289 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:31.289 filename1: (groupid=0, jobs=1): err= 0: pid=84143: Mon Dec 9 04:13:12 2024 00:23:31.289 read: IOPS=2116, BW=16.5MiB/s (17.3MB/s)(82.7MiB/5002msec) 00:23:31.289 slat (nsec): min=5171, max=89656, avg=14620.27, stdev=9037.88 00:23:31.289 clat (usec): min=602, max=22045, avg=3729.46, stdev=1472.38 00:23:31.289 lat (usec): min=610, max=22076, avg=3744.08, stdev=1473.22 00:23:31.289 clat percentiles (usec): 00:23:31.289 | 1.00th=[ 1057], 5.00th=[ 1762], 10.00th=[ 2212], 20.00th=[ 2868], 00:23:31.289 | 30.00th=[ 3228], 40.00th=[ 3752], 50.00th=[ 4015], 60.00th=[ 4080], 00:23:31.289 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4752], 00:23:31.289 | 99.00th=[ 8029], 99.50th=[13304], 99.90th=[20055], 99.95th=[20579], 00:23:31.289 | 99.99th=[21627] 00:23:31.289 bw ( KiB/s): min=12384, max=20384, per=23.38%, avg=17096.89, stdev=2738.88, samples=9 00:23:31.289 iops : min= 1548, max= 2548, avg=2137.11, stdev=342.36, samples=9 00:23:31.289 lat (usec) : 750=0.03%, 1000=0.64% 00:23:31.289 lat (msec) : 2=7.32%, 4=41.58%, 10=49.49%, 20=0.84%, 50=0.10% 00:23:31.289 cpu : usr=93.92%, sys=5.14%, ctx=19, majf=0, minf=0 00:23:31.289 IO depths : 1=0.3%, 2=13.5%, 4=57.8%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:31.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.289 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.289 issued rwts: total=10585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.289 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:31.289 00:23:31.289 Run status group 0 (all jobs): 00:23:31.289 READ: bw=71.4MiB/s (74.9MB/s), 16.5MiB/s-18.5MiB/s (17.3MB/s-19.4MB/s), io=357MiB (375MB), run=5001-5002msec 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.289 04:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:31.290 04:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.290 04:13:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:31.290 04:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.290 04:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:31.290 04:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.290 00:23:31.290 real 0m23.790s 00:23:31.290 user 2m6.171s 00:23:31.290 sys 0m6.255s 00:23:31.290 04:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:31.290 04:13:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:31.290 ************************************ 00:23:31.290 END TEST fio_dif_rand_params 00:23:31.290 ************************************ 00:23:31.290 04:13:12 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:23:31.290 04:13:12 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:31.290 04:13:12 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:31.290 04:13:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:31.290 ************************************ 00:23:31.290 START TEST fio_dif_digest 00:23:31.290 ************************************ 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
hdgst=true 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:31.290 bdev_null0 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:31.290 [2024-12-09 04:13:12.799241] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:31.290 { 00:23:31.290 "params": { 00:23:31.290 "name": "Nvme$subsystem", 00:23:31.290 "trtype": "$TEST_TRANSPORT", 00:23:31.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.290 "adrfam": "ipv4", 00:23:31.290 "trsvcid": "$NVMF_PORT", 00:23:31.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.290 "hdgst": ${hdgst:-false}, 00:23:31.290 "ddgst": ${ddgst:-false} 00:23:31.290 }, 
00:23:31.290 "method": "bdev_nvme_attach_controller" 00:23:31.290 } 00:23:31.290 EOF 00:23:31.290 )") 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:31.290 "params": { 00:23:31.290 "name": "Nvme0", 00:23:31.290 "trtype": "tcp", 00:23:31.290 "traddr": "10.0.0.3", 00:23:31.290 "adrfam": "ipv4", 00:23:31.290 "trsvcid": "4420", 00:23:31.290 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:31.290 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:31.290 "hdgst": true, 00:23:31.290 "ddgst": true 00:23:31.290 }, 00:23:31.290 "method": "bdev_nvme_attach_controller" 00:23:31.290 }' 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:31.290 04:13:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:31.290 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:31.290 ... 
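The fio job file itself is streamed over /dev/fd/61 and never echoed to the log. The block below is a hedged reconstruction from the parameters visible above (randread, bs=128k, numjobs=3, iodepth=3, runtime=10, spdk_bdev ioengine); the standalone file names digest.fio/bdev.json and options such as thread/time_based are illustrative rather than copied from dif.sh.

cat > digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
numjobs=3
iodepth=3
time_based=1
runtime=10

[filename0]
# Namespace bdev exposed by the attached NVMe-oF controller (name assumed).
filename=Nvme0n1
EOF
# Equivalent standalone invocation; the test preloads the fio plugin the same way.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json digest.fio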
00:23:31.290 fio-3.35 00:23:31.290 Starting 3 threads 00:23:43.519 00:23:43.519 filename0: (groupid=0, jobs=1): err= 0: pid=84249: Mon Dec 9 04:13:23 2024 00:23:43.519 read: IOPS=268, BW=33.5MiB/s (35.1MB/s)(335MiB/10007msec) 00:23:43.519 slat (nsec): min=6463, max=70374, avg=11689.02, stdev=6556.32 00:23:43.519 clat (usec): min=7441, max=13181, avg=11163.75, stdev=303.53 00:23:43.519 lat (usec): min=7479, max=13196, avg=11175.44, stdev=304.85 00:23:43.519 clat percentiles (usec): 00:23:43.519 | 1.00th=[10945], 5.00th=[10945], 10.00th=[10945], 20.00th=[10945], 00:23:43.519 | 30.00th=[11076], 40.00th=[11076], 50.00th=[11076], 60.00th=[11076], 00:23:43.519 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11600], 95.00th=[11731], 00:23:43.519 | 99.00th=[12125], 99.50th=[12256], 99.90th=[13173], 99.95th=[13173], 00:23:43.519 | 99.99th=[13173] 00:23:43.519 bw ( KiB/s): min=32256, max=34560, per=33.33%, avg=34291.20, stdev=624.17, samples=20 00:23:43.519 iops : min= 252, max= 270, avg=267.90, stdev= 4.88, samples=20 00:23:43.519 lat (msec) : 10=0.22%, 20=99.78% 00:23:43.519 cpu : usr=93.08%, sys=6.21%, ctx=75, majf=0, minf=0 00:23:43.519 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:43.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.519 issued rwts: total=2682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.519 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:43.519 filename0: (groupid=0, jobs=1): err= 0: pid=84250: Mon Dec 9 04:13:23 2024 00:23:43.519 read: IOPS=267, BW=33.5MiB/s (35.1MB/s)(335MiB/10003msec) 00:23:43.519 slat (nsec): min=6196, max=65165, avg=11410.21, stdev=6251.88 00:23:43.519 clat (usec): min=10833, max=14139, avg=11171.81, stdev=288.12 00:23:43.519 lat (usec): min=10850, max=14172, avg=11183.22, stdev=288.61 00:23:43.519 clat percentiles (usec): 00:23:43.519 | 1.00th=[10945], 5.00th=[10945], 10.00th=[10945], 20.00th=[10945], 00:23:43.519 | 30.00th=[11076], 40.00th=[11076], 50.00th=[11076], 60.00th=[11076], 00:23:43.519 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11600], 95.00th=[11731], 00:23:43.519 | 99.00th=[12125], 99.50th=[12256], 99.90th=[14091], 99.95th=[14091], 00:23:43.519 | 99.99th=[14091] 00:23:43.519 bw ( KiB/s): min=32256, max=34560, per=33.32%, avg=34277.05, stdev=637.98, samples=19 00:23:43.519 iops : min= 252, max= 270, avg=267.79, stdev= 4.98, samples=19 00:23:43.519 lat (msec) : 20=100.00% 00:23:43.519 cpu : usr=94.10%, sys=5.40%, ctx=93, majf=0, minf=0 00:23:43.519 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:43.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.519 issued rwts: total=2679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.519 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:43.519 filename0: (groupid=0, jobs=1): err= 0: pid=84251: Mon Dec 9 04:13:23 2024 00:23:43.519 read: IOPS=268, BW=33.5MiB/s (35.1MB/s)(335MiB/10007msec) 00:23:43.519 slat (nsec): min=6353, max=54347, avg=13067.49, stdev=7260.11 00:23:43.519 clat (usec): min=7392, max=12838, avg=11160.19, stdev=320.63 00:23:43.519 lat (usec): min=7399, max=12857, avg=11173.26, stdev=321.61 00:23:43.519 clat percentiles (usec): 00:23:43.519 | 1.00th=[10945], 5.00th=[10945], 10.00th=[10945], 20.00th=[10945], 00:23:43.519 | 30.00th=[11076], 40.00th=[11076], 
50.00th=[11076], 60.00th=[11076], 00:23:43.519 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11600], 95.00th=[11731], 00:23:43.519 | 99.00th=[12125], 99.50th=[12387], 99.90th=[12780], 99.95th=[12780], 00:23:43.519 | 99.99th=[12780] 00:23:43.519 bw ( KiB/s): min=32256, max=34560, per=33.33%, avg=34291.20, stdev=624.17, samples=20 00:23:43.519 iops : min= 252, max= 270, avg=267.90, stdev= 4.88, samples=20 00:23:43.519 lat (msec) : 10=0.22%, 20=99.78% 00:23:43.519 cpu : usr=96.19%, sys=3.28%, ctx=123, majf=0, minf=0 00:23:43.519 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:43.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.519 issued rwts: total=2682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.519 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:43.519 00:23:43.519 Run status group 0 (all jobs): 00:23:43.519 READ: bw=100MiB/s (105MB/s), 33.5MiB/s-33.5MiB/s (35.1MB/s-35.1MB/s), io=1005MiB (1054MB), run=10003-10007msec 00:23:43.519 04:13:23 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:43.519 04:13:23 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:23:43.519 04:13:23 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:23:43.519 04:13:23 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:43.519 04:13:23 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:23:43.519 04:13:23 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:43.519 04:13:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.519 04:13:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:43.519 04:13:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.519 04:13:23 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:43.519 04:13:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.519 04:13:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:43.519 04:13:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.519 00:23:43.519 real 0m11.031s 00:23:43.519 user 0m29.014s 00:23:43.519 sys 0m1.769s 00:23:43.519 04:13:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:43.519 04:13:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:43.519 ************************************ 00:23:43.519 END TEST fio_dif_digest 00:23:43.519 ************************************ 00:23:43.519 04:13:23 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:43.519 04:13:23 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:23:43.519 04:13:23 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:43.519 04:13:23 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:23:43.519 04:13:23 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:43.519 04:13:23 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:23:43.519 04:13:23 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:43.519 04:13:23 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:43.519 rmmod nvme_tcp 00:23:43.519 rmmod nvme_fabrics 00:23:43.519 rmmod nvme_keyring 00:23:43.519 04:13:23 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:43.519 04:13:23 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:23:43.519 04:13:23 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:23:43.519 04:13:23 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83495 ']' 00:23:43.519 04:13:23 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83495 00:23:43.519 04:13:23 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 83495 ']' 00:23:43.519 04:13:23 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 83495 00:23:43.519 04:13:23 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:23:43.519 04:13:23 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.520 04:13:23 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83495 00:23:43.520 04:13:23 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:43.520 killing process with pid 83495 00:23:43.520 04:13:23 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:43.520 04:13:23 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83495' 00:23:43.520 04:13:23 nvmf_dif -- common/autotest_common.sh@973 -- # kill 83495 00:23:43.520 04:13:23 nvmf_dif -- common/autotest_common.sh@978 -- # wait 83495 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:43.520 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:43.520 Waiting for block devices as requested 00:23:43.520 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:43.520 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:43.520 04:13:24 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.520 04:13:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:43.520 04:13:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.520 04:13:24 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:23:43.520 00:23:43.520 real 1m0.093s 00:23:43.520 user 3m50.499s 00:23:43.520 sys 0m17.804s 00:23:43.520 04:13:24 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:43.520 04:13:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:43.520 ************************************ 00:23:43.520 END TEST nvmf_dif 00:23:43.520 ************************************ 00:23:43.520 04:13:25 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:43.520 04:13:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:43.520 04:13:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:43.520 04:13:25 -- common/autotest_common.sh@10 -- # set +x 00:23:43.520 ************************************ 00:23:43.520 START TEST nvmf_abort_qd_sizes 00:23:43.520 ************************************ 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:43.520 * Looking for test storage... 00:23:43.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:43.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.520 --rc genhtml_branch_coverage=1 00:23:43.520 --rc genhtml_function_coverage=1 00:23:43.520 --rc genhtml_legend=1 00:23:43.520 --rc geninfo_all_blocks=1 00:23:43.520 --rc geninfo_unexecuted_blocks=1 00:23:43.520 00:23:43.520 ' 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:43.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.520 --rc genhtml_branch_coverage=1 00:23:43.520 --rc genhtml_function_coverage=1 00:23:43.520 --rc genhtml_legend=1 00:23:43.520 --rc geninfo_all_blocks=1 00:23:43.520 --rc geninfo_unexecuted_blocks=1 00:23:43.520 00:23:43.520 ' 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:43.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.520 --rc genhtml_branch_coverage=1 00:23:43.520 --rc genhtml_function_coverage=1 00:23:43.520 --rc genhtml_legend=1 00:23:43.520 --rc geninfo_all_blocks=1 00:23:43.520 --rc geninfo_unexecuted_blocks=1 00:23:43.520 00:23:43.520 ' 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:43.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.520 --rc genhtml_branch_coverage=1 00:23:43.520 --rc genhtml_function_coverage=1 00:23:43.520 --rc genhtml_legend=1 00:23:43.520 --rc geninfo_all_blocks=1 00:23:43.520 --rc geninfo_unexecuted_blocks=1 00:23:43.520 00:23:43.520 ' 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.520 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:43.521 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:43.521 Cannot find device "nvmf_init_br" 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:43.521 Cannot find device "nvmf_init_br2" 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:43.521 Cannot find device "nvmf_tgt_br" 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:43.521 Cannot find device "nvmf_tgt_br2" 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:43.521 Cannot find device "nvmf_init_br" 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:43.521 Cannot find device "nvmf_init_br2" 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:43.521 Cannot find device "nvmf_tgt_br" 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:43.521 Cannot find device "nvmf_tgt_br2" 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:43.521 Cannot find device "nvmf_br" 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:43.521 Cannot find device "nvmf_init_if" 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:43.521 Cannot find device "nvmf_init_if2" 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:43.521 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
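The "Cannot find device" / "Cannot open network namespace" messages above are expected: teardown runs before the topology exists. As a reading aid for the wall of ip commands that follows, here is a minimal one-pair sketch of the veth/namespace layout being built (interface, bridge, and address names are taken from the log; the harness builds a second pair, 10.0.0.2/10.0.0.4, the same way).

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                     # both peer ends join the same bridge
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ping -c 1 10.0.0.3                                          # host initiator can now reach the target address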
00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:43.521 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:43.521 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:43.780 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:43.780 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:23:43.780 00:23:43.780 --- 10.0.0.3 ping statistics --- 00:23:43.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.780 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:43.780 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:43.780 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:23:43.780 00:23:43.780 --- 10.0.0.4 ping statistics --- 00:23:43.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.780 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:43.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:43.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:23:43.780 00:23:43.780 --- 10.0.0.1 ping statistics --- 00:23:43.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.780 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:43.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:43.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:23:43.780 00:23:43.780 --- 10.0.0.2 ping statistics --- 00:23:43.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.780 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:23:43.780 04:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:44.714 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:44.714 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:44.714 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84900 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84900 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84900 ']' 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.714 04:13:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:44.973 [2024-12-09 04:13:26.664350] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
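A minimal sketch of what nvmfappstart/waitforlisten amount to at this point: the binary path, flags, and namespace name are taken from the log, while the rpc.py polling loop is illustrative only (the real waitforlisten helper also handles timeouts and stale pid files).

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
# Poll the default RPC socket until the target answers; only then are RPCs safe to issue.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done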
00:23:44.973 [2024-12-09 04:13:26.664433] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.973 [2024-12-09 04:13:26.819310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.973 [2024-12-09 04:13:26.879597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.973 [2024-12-09 04:13:26.879653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.973 [2024-12-09 04:13:26.879668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.973 [2024-12-09 04:13:26.879679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.973 [2024-12-09 04:13:26.879688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.973 [2024-12-09 04:13:26.880970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.973 [2024-12-09 04:13:26.881078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.973 [2024-12-09 04:13:26.881199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:44.973 [2024-12-09 04:13:26.881205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.232 [2024-12-09 04:13:26.946154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:23:45.232 04:13:27 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:45.232 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
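Condensed for readability, the RPC and abort sequence the spdk_target_abort test issues next is sketched below. All arguments are taken verbatim from the log; the rpc() wrapper and the default socket path are stand-ins for the harness's rpc_cmd helper, not its actual implementation.

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target   # exposes spdk_targetn1
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420
for qd in 4 24 64; do   # the three queue depths whose abort statistics follow
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done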
00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:45.233 04:13:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:45.233 ************************************ 00:23:45.233 START TEST spdk_target_abort 00:23:45.233 ************************************ 00:23:45.233 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:23:45.233 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:23:45.233 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:23:45.233 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.233 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:45.233 spdk_targetn1 00:23:45.233 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.233 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:45.233 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.233 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:45.491 [2024-12-09 04:13:27.183737] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:45.491 [2024-12-09 04:13:27.224733] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:45.491 04:13:27 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:45.491 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:45.492 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:45.492 04:13:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:48.775 Initializing NVMe Controllers 00:23:48.775 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:48.775 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:48.775 Initialization complete. Launching workers. 
00:23:48.775 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9090, failed: 0 00:23:48.775 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1034, failed to submit 8056 00:23:48.775 success 756, unsuccessful 278, failed 0 00:23:48.775 04:13:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:48.775 04:13:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:52.097 Initializing NVMe Controllers 00:23:52.097 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:52.097 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:52.097 Initialization complete. Launching workers. 00:23:52.097 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9014, failed: 0 00:23:52.097 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1155, failed to submit 7859 00:23:52.097 success 387, unsuccessful 768, failed 0 00:23:52.097 04:13:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:52.097 04:13:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:55.382 Initializing NVMe Controllers 00:23:55.382 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:55.382 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:55.382 Initialization complete. Launching workers. 
00:23:55.383 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30273, failed: 0 00:23:55.383 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2307, failed to submit 27966 00:23:55.383 success 464, unsuccessful 1843, failed 0 00:23:55.383 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:55.383 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.383 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:55.383 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.383 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:55.383 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.383 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:55.640 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.640 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84900 00:23:55.640 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84900 ']' 00:23:55.640 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84900 00:23:55.640 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:23:55.640 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:55.640 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84900 00:23:55.640 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:55.640 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:55.640 killing process with pid 84900 00:23:55.640 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84900' 00:23:55.640 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84900 00:23:55.640 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84900 00:23:55.897 00:23:55.897 real 0m10.666s 00:23:55.897 user 0m40.928s 00:23:55.897 sys 0m1.829s 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:55.897 ************************************ 00:23:55.897 END TEST spdk_target_abort 00:23:55.897 ************************************ 00:23:55.897 04:13:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:55.897 04:13:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:55.897 04:13:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.897 04:13:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:55.897 ************************************ 00:23:55.897 START TEST kernel_target_abort 00:23:55.897 
************************************ 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:55.897 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:56.154 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:56.154 04:13:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:56.411 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:56.411 Waiting for block devices as requested 00:23:56.411 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:56.668 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:56.668 No valid GPT data, bailing 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:23:56.668 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:56.925 No valid GPT data, bailing 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:56.925 No valid GPT data, bailing 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:56.925 No valid GPT data, bailing 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:56.925 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc --hostid=9ed3da8d-b493-400f-8e42-fb307dd7edcc -a 10.0.0.1 -t tcp -s 4420 00:23:56.925 00:23:56.925 Discovery Log Number of Records 2, Generation counter 2 00:23:56.925 =====Discovery Log Entry 0====== 00:23:56.925 trtype: tcp 00:23:56.925 adrfam: ipv4 00:23:56.925 subtype: current discovery subsystem 00:23:56.925 treq: not specified, sq flow control disable supported 00:23:56.925 portid: 1 00:23:56.925 trsvcid: 4420 00:23:56.925 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:56.925 traddr: 10.0.0.1 00:23:56.925 eflags: none 00:23:56.925 sectype: none 00:23:56.925 =====Discovery Log Entry 1====== 00:23:56.925 trtype: tcp 00:23:56.925 adrfam: ipv4 00:23:56.925 subtype: nvme subsystem 00:23:56.925 treq: not specified, sq flow control disable supported 00:23:56.925 portid: 1 00:23:56.925 trsvcid: 4420 00:23:56.925 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:56.925 traddr: 10.0.0.1 00:23:56.925 eflags: none 00:23:56.925 sectype: none 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:57.183 04:13:38 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:57.183 04:13:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:00.465 Initializing NVMe Controllers 00:24:00.465 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:00.465 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:00.465 Initialization complete. Launching workers. 00:24:00.465 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28792, failed: 0 00:24:00.465 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28792, failed to submit 0 00:24:00.465 success 0, unsuccessful 28792, failed 0 00:24:00.465 04:13:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:00.465 04:13:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:03.749 Initializing NVMe Controllers 00:24:03.749 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:03.749 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:03.749 Initialization complete. Launching workers. 
00:24:03.749 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63748, failed: 0 00:24:03.749 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25691, failed to submit 38057 00:24:03.749 success 0, unsuccessful 25691, failed 0 00:24:03.749 04:13:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:03.749 04:13:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:07.033 Initializing NVMe Controllers 00:24:07.033 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:07.033 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:07.033 Initialization complete. Launching workers. 00:24:07.033 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71481, failed: 0 00:24:07.033 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17846, failed to submit 53635 00:24:07.033 success 0, unsuccessful 17846, failed 0 00:24:07.033 04:13:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:07.033 04:13:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:07.033 04:13:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:24:07.033 04:13:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:07.033 04:13:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:07.033 04:13:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:07.033 04:13:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:07.033 04:13:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:07.033 04:13:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:07.033 04:13:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:07.291 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:09.194 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:09.194 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:09.194 00:24:09.194 real 0m12.996s 00:24:09.194 user 0m5.728s 00:24:09.194 sys 0m4.454s 00:24:09.194 04:13:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:09.194 04:13:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:09.194 ************************************ 00:24:09.194 END TEST kernel_target_abort 00:24:09.194 ************************************ 00:24:09.194 04:13:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:09.194 04:13:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:09.194 
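The kernel_target_abort run traced above drives the in-kernel nvmet target entirely through configfs. A condensed sketch of what configure_kernel_target and clean_kernel_target do follows; the subsystem name, backing device and address are the ones this run happened to pick, but the redirect targets are not visible in an xtrace, so the attribute paths below are the standard nvmet configfs names rather than a verbatim copy of the script.

```bash
# Sketch of the configfs setup/teardown seen in the trace above (attribute paths assumed,
# values copied from the log: nqn.2016-06.io.spdk:testnqn, /dev/nvme1n1, 10.0.0.1:4420).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"            # model string echoed in the trace
echo 1                                > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1                     > "$subsys/namespaces/1/device_path"  # first unused block device found
echo 1                                > "$subsys/namespaces/1/enable"
echo 10.0.0.1                         > "$nvmet/ports/1/addr_traddr"
echo tcp                              > "$nvmet/ports/1/addr_trtype"
echo 4420                             > "$nvmet/ports/1/addr_trsvcid"
echo ipv4                             > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"                             # expose the subsystem on the port

# Teardown, mirroring clean_kernel_target:
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
modprobe -r nvmet_tcp nvmet
```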
04:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:09.194 04:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:24:09.194 04:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:09.194 04:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:24:09.194 04:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:09.194 04:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:09.194 rmmod nvme_tcp 00:24:09.194 rmmod nvme_fabrics 00:24:09.194 rmmod nvme_keyring 00:24:09.194 04:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:09.194 04:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:24:09.194 04:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:24:09.194 04:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84900 ']' 00:24:09.194 04:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84900 00:24:09.194 04:13:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84900 ']' 00:24:09.194 04:13:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84900 00:24:09.194 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84900) - No such process 00:24:09.194 Process with pid 84900 is not found 00:24:09.194 04:13:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84900 is not found' 00:24:09.194 04:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:24:09.194 04:13:50 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:09.453 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:09.453 Waiting for block devices as requested 00:24:09.711 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:09.711 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:09.711 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:09.711 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:09.711 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:24:09.711 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:24:09.711 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:09.711 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:24:09.711 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:09.711 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:09.711 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:09.711 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:09.711 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:09.969 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:09.969 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:09.969 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:09.969 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:09.969 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:09.969 04:13:51 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:09.969 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:09.969 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:09.969 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:09.969 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:09.969 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:09.969 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.969 04:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:09.969 04:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.969 04:13:51 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:24:09.969 00:24:09.969 real 0m26.829s 00:24:09.969 user 0m47.842s 00:24:09.970 sys 0m7.811s 00:24:09.970 04:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:09.970 04:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:09.970 ************************************ 00:24:09.970 END TEST nvmf_abort_qd_sizes 00:24:09.970 ************************************ 00:24:10.228 04:13:51 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:10.228 04:13:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:10.228 04:13:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:10.228 04:13:51 -- common/autotest_common.sh@10 -- # set +x 00:24:10.228 ************************************ 00:24:10.228 START TEST keyring_file 00:24:10.228 ************************************ 00:24:10.228 04:13:51 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:10.228 * Looking for test storage... 
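The nvmftestfini / nvmf_veth_fini portion of the trace just above unwinds the virtual network the harness set up for the TCP tests. Condensed, and with interface and namespace names taken from the log, the teardown amounts to roughly the following; the body of _remove_spdk_ns is suppressed in the trace, so the final namespace deletion is an assumption.

```bash
# Condensed sketch of the nvmf test-environment teardown traced above.
modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics / nvme_keyring, per the rmmod lines in the log
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF rules, keep everything else

for br_if in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br_if" nomaster
    ip link set "$br_if" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumed: _remove_spdk_ns is not expanded in the xtrace output
```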
00:24:10.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:10.228 04:13:52 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:10.228 04:13:52 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:24:10.228 04:13:52 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:10.228 04:13:52 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:10.228 04:13:52 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:10.228 04:13:52 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@345 -- # : 1 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@353 -- # local d=1 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@355 -- # echo 1 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@353 -- # local d=2 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@355 -- # echo 2 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@368 -- # return 0 00:24:10.229 04:13:52 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:10.229 04:13:52 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:10.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.229 --rc genhtml_branch_coverage=1 00:24:10.229 --rc genhtml_function_coverage=1 00:24:10.229 --rc genhtml_legend=1 00:24:10.229 --rc geninfo_all_blocks=1 00:24:10.229 --rc geninfo_unexecuted_blocks=1 00:24:10.229 00:24:10.229 ' 00:24:10.229 04:13:52 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:10.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.229 --rc genhtml_branch_coverage=1 00:24:10.229 --rc genhtml_function_coverage=1 00:24:10.229 --rc genhtml_legend=1 00:24:10.229 --rc geninfo_all_blocks=1 00:24:10.229 --rc 
geninfo_unexecuted_blocks=1 00:24:10.229 00:24:10.229 ' 00:24:10.229 04:13:52 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:10.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.229 --rc genhtml_branch_coverage=1 00:24:10.229 --rc genhtml_function_coverage=1 00:24:10.229 --rc genhtml_legend=1 00:24:10.229 --rc geninfo_all_blocks=1 00:24:10.229 --rc geninfo_unexecuted_blocks=1 00:24:10.229 00:24:10.229 ' 00:24:10.229 04:13:52 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:10.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.229 --rc genhtml_branch_coverage=1 00:24:10.229 --rc genhtml_function_coverage=1 00:24:10.229 --rc genhtml_legend=1 00:24:10.229 --rc geninfo_all_blocks=1 00:24:10.229 --rc geninfo_unexecuted_blocks=1 00:24:10.229 00:24:10.229 ' 00:24:10.229 04:13:52 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:10.229 04:13:52 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.229 04:13:52 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.229 04:13:52 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.229 04:13:52 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.229 04:13:52 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.229 04:13:52 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:10.229 04:13:52 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@51 -- # : 0 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:10.229 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:10.229 04:13:52 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:10.229 04:13:52 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:10.229 04:13:52 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:10.229 04:13:52 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:10.229 04:13:52 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:10.229 04:13:52 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:10.229 04:13:52 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:10.229 04:13:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:10.229 04:13:52 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:10.229 04:13:52 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:10.229 04:13:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:10.229 04:13:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:10.229 04:13:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.RiyB7zwoZy 00:24:10.229 04:13:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:24:10.229 04:13:52 keyring_file -- nvmf/common.sh@733 -- # python - 00:24:10.488 04:13:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.RiyB7zwoZy 00:24:10.488 04:13:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.RiyB7zwoZy 00:24:10.488 04:13:52 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.RiyB7zwoZy 00:24:10.488 04:13:52 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:10.488 04:13:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:10.488 04:13:52 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:10.488 04:13:52 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:10.488 04:13:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:10.488 04:13:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:10.488 04:13:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9zsROY6OuC 00:24:10.488 04:13:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:10.488 04:13:52 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:10.488 04:13:52 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:24:10.488 04:13:52 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:10.488 04:13:52 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:24:10.488 04:13:52 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:24:10.488 04:13:52 keyring_file -- nvmf/common.sh@733 -- # python - 00:24:10.488 04:13:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9zsROY6OuC 00:24:10.488 04:13:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9zsROY6OuC 00:24:10.488 04:13:52 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.9zsROY6OuC 00:24:10.488 04:13:52 keyring_file -- keyring/file.sh@30 -- # tgtpid=85804 00:24:10.488 04:13:52 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:10.488 04:13:52 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85804 00:24:10.488 04:13:52 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85804 ']' 00:24:10.488 04:13:52 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.488 04:13:52 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
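Before the keyring test starts the target, the prep_key calls above turn the two hex key strings into NVMe TLS interchange PSK files. The python snippet that format_interchange_psk pipes the key through is hidden in the trace, so it is treated as an opaque helper in this rough re-statement; the temp-file paths are the ones this run produced. The 0600 mode matters: near the end of the trace the first key file is chmod'ed to 0660 and the re-add is wrapped in NOT, i.e. expected to fail.

```bash
# Rough sketch of the prep_key flow traced above (format_interchange_psk left as a black box).
key0=00112233445566778899aabbccddeeff     # plaintext hex key material used by the test
key1=112233445566778899aabbccddeeff00

key0path=$(mktemp)                               # this run got /tmp/tmp.RiyB7zwoZy
format_interchange_psk "$key0" 0 > "$key0path"   # wraps the hex key as an NVMeTLSkey-1 PSK; trailing 0 is the digest argument
chmod 0600 "$key0path"                           # restrictive mode; the trace later flips it to 0660 to exercise rejection

key1path=$(mktemp)                               # /tmp/tmp.9zsROY6OuC in this run
format_interchange_psk "$key1" 0 > "$key1path"
chmod 0600 "$key1path"
```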
00:24:10.488 04:13:52 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.488 04:13:52 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.488 04:13:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:10.488 [2024-12-09 04:13:52.362581] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:24:10.488 [2024-12-09 04:13:52.362702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85804 ] 00:24:10.746 [2024-12-09 04:13:52.513561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.746 [2024-12-09 04:13:52.581053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.746 [2024-12-09 04:13:52.690588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:11.312 04:13:52 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:11.312 04:13:52 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:24:11.312 04:13:52 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:11.312 04:13:52 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.312 04:13:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:11.312 [2024-12-09 04:13:52.976626] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.312 null0 00:24:11.313 [2024-12-09 04:13:53.008596] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:11.313 [2024-12-09 04:13:53.008962] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.313 04:13:53 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:11.313 [2024-12-09 04:13:53.036586] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:11.313 request: 00:24:11.313 { 00:24:11.313 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:11.313 "secure_channel": false, 00:24:11.313 "listen_address": { 00:24:11.313 "trtype": "tcp", 00:24:11.313 "traddr": "127.0.0.1", 00:24:11.313 "trsvcid": "4420" 00:24:11.313 }, 00:24:11.313 "method": "nvmf_subsystem_add_listener", 
00:24:11.313 "req_id": 1 00:24:11.313 } 00:24:11.313 Got JSON-RPC error response 00:24:11.313 response: 00:24:11.313 { 00:24:11.313 "code": -32602, 00:24:11.313 "message": "Invalid parameters" 00:24:11.313 } 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:11.313 04:13:53 keyring_file -- keyring/file.sh@47 -- # bperfpid=85814 00:24:11.313 04:13:53 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:11.313 04:13:53 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85814 /var/tmp/bperf.sock 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85814 ']' 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:11.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.313 04:13:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:11.313 [2024-12-09 04:13:53.104677] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:24:11.313 [2024-12-09 04:13:53.104930] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85814 ] 00:24:11.313 [2024-12-09 04:13:53.256855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.570 [2024-12-09 04:13:53.321753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.570 [2024-12-09 04:13:53.400329] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:11.570 04:13:53 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:11.570 04:13:53 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:24:11.570 04:13:53 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.RiyB7zwoZy 00:24:11.570 04:13:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.RiyB7zwoZy 00:24:12.135 04:13:53 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9zsROY6OuC 00:24:12.135 04:13:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9zsROY6OuC 00:24:12.135 04:13:54 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:24:12.135 04:13:54 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:12.135 04:13:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:12.135 04:13:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:12.135 04:13:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:12.393 04:13:54 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.RiyB7zwoZy == \/\t\m\p\/\t\m\p\.\R\i\y\B\7\z\w\o\Z\y ]] 00:24:12.393 04:13:54 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:24:12.393 04:13:54 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:24:12.393 04:13:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:12.393 04:13:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:12.393 04:13:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:12.651 04:13:54 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.9zsROY6OuC == \/\t\m\p\/\t\m\p\.\9\z\s\R\O\Y\6\O\u\C ]] 00:24:12.651 04:13:54 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:24:12.651 04:13:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:12.651 04:13:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:12.652 04:13:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:12.652 04:13:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:12.652 04:13:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:12.910 04:13:54 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:12.910 04:13:54 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:24:12.910 04:13:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:12.910 04:13:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:12.910 04:13:54 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:12.910 04:13:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:12.910 04:13:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:13.168 04:13:54 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:24:13.168 04:13:54 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:13.168 04:13:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:13.456 [2024-12-09 04:13:55.185174] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:13.456 nvme0n1 00:24:13.456 04:13:55 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:24:13.456 04:13:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:13.457 04:13:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:13.457 04:13:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:13.457 04:13:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:13.457 04:13:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:13.715 04:13:55 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:24:13.715 04:13:55 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:24:13.715 04:13:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:13.715 04:13:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:13.715 04:13:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:13.715 04:13:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:13.715 04:13:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:13.973 04:13:55 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:24:13.973 04:13:55 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:13.973 Running I/O for 1 seconds... 
00:24:15.345 13338.00 IOPS, 52.10 MiB/s 00:24:15.345 Latency(us) 00:24:15.345 [2024-12-09T04:13:57.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.345 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:15.345 nvme0n1 : 1.01 13386.39 52.29 0.00 0.00 9536.52 4408.79 21090.68 00:24:15.345 [2024-12-09T04:13:57.295Z] =================================================================================================================== 00:24:15.345 [2024-12-09T04:13:57.295Z] Total : 13386.39 52.29 0.00 0.00 9536.52 4408.79 21090.68 00:24:15.345 { 00:24:15.345 "results": [ 00:24:15.345 { 00:24:15.345 "job": "nvme0n1", 00:24:15.345 "core_mask": "0x2", 00:24:15.345 "workload": "randrw", 00:24:15.345 "percentage": 50, 00:24:15.345 "status": "finished", 00:24:15.345 "queue_depth": 128, 00:24:15.345 "io_size": 4096, 00:24:15.345 "runtime": 1.006022, 00:24:15.345 "iops": 13386.387176423577, 00:24:15.345 "mibps": 52.2905749079046, 00:24:15.345 "io_failed": 0, 00:24:15.345 "io_timeout": 0, 00:24:15.345 "avg_latency_us": 9536.517246602807, 00:24:15.345 "min_latency_us": 4408.785454545455, 00:24:15.345 "max_latency_us": 21090.676363636365 00:24:15.345 } 00:24:15.345 ], 00:24:15.345 "core_count": 1 00:24:15.345 } 00:24:15.345 04:13:56 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:15.345 04:13:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:15.345 04:13:57 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:24:15.345 04:13:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:15.345 04:13:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:15.345 04:13:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:15.345 04:13:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:15.345 04:13:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:15.603 04:13:57 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:15.603 04:13:57 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:24:15.603 04:13:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:15.603 04:13:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:15.603 04:13:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:15.603 04:13:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:15.603 04:13:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:15.861 04:13:57 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:24:15.861 04:13:57 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:15.861 04:13:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:24:15.861 04:13:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:15.861 04:13:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:15.861 04:13:57 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.861 04:13:57 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:15.861 04:13:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.861 04:13:57 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:15.861 04:13:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:16.120 [2024-12-09 04:13:57.972339] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:16.120 [2024-12-09 04:13:57.972729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1920140 (107): Transport endpoint is not connected 00:24:16.120 [2024-12-09 04:13:57.973718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1920140 (9): Bad file descriptor 00:24:16.120 [2024-12-09 04:13:57.974716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:24:16.120 [2024-12-09 04:13:57.974742] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:16.120 [2024-12-09 04:13:57.974753] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:24:16.120 [2024-12-09 04:13:57.974762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:24:16.120 request: 00:24:16.120 { 00:24:16.120 "name": "nvme0", 00:24:16.120 "trtype": "tcp", 00:24:16.120 "traddr": "127.0.0.1", 00:24:16.120 "adrfam": "ipv4", 00:24:16.120 "trsvcid": "4420", 00:24:16.120 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.120 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:16.120 "prchk_reftag": false, 00:24:16.120 "prchk_guard": false, 00:24:16.120 "hdgst": false, 00:24:16.120 "ddgst": false, 00:24:16.120 "psk": "key1", 00:24:16.120 "allow_unrecognized_csi": false, 00:24:16.120 "method": "bdev_nvme_attach_controller", 00:24:16.120 "req_id": 1 00:24:16.120 } 00:24:16.120 Got JSON-RPC error response 00:24:16.120 response: 00:24:16.120 { 00:24:16.120 "code": -5, 00:24:16.120 "message": "Input/output error" 00:24:16.120 } 00:24:16.120 04:13:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:16.120 04:13:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:16.120 04:13:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:16.120 04:13:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:16.120 04:13:57 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:24:16.120 04:13:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:16.120 04:13:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:16.120 04:13:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:16.120 04:13:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:16.120 04:13:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:16.378 04:13:58 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:16.378 04:13:58 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:24:16.378 04:13:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:16.378 04:13:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:16.378 04:13:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:16.378 04:13:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:16.378 04:13:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:16.636 04:13:58 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:24:16.636 04:13:58 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:24:16.636 04:13:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:16.894 04:13:58 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:24:16.894 04:13:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:17.152 04:13:59 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:24:17.152 04:13:59 keyring_file -- keyring/file.sh@78 -- # jq length 00:24:17.152 04:13:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:17.410 04:13:59 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:24:17.410 04:13:59 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.RiyB7zwoZy 00:24:17.410 04:13:59 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.RiyB7zwoZy 00:24:17.410 04:13:59 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:24:17.410 04:13:59 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.RiyB7zwoZy 00:24:17.410 04:13:59 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:17.410 04:13:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:17.410 04:13:59 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:17.410 04:13:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:17.410 04:13:59 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.RiyB7zwoZy 00:24:17.410 04:13:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.RiyB7zwoZy 00:24:17.669 [2024-12-09 04:13:59.476041] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.RiyB7zwoZy': 0100660 00:24:17.669 [2024-12-09 04:13:59.476083] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:17.669 request: 00:24:17.669 { 00:24:17.669 "name": "key0", 00:24:17.669 "path": "/tmp/tmp.RiyB7zwoZy", 00:24:17.669 "method": "keyring_file_add_key", 00:24:17.669 "req_id": 1 00:24:17.669 } 00:24:17.669 Got JSON-RPC error response 00:24:17.669 response: 00:24:17.669 { 00:24:17.669 "code": -1, 00:24:17.669 "message": "Operation not permitted" 00:24:17.669 } 00:24:17.669 04:13:59 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:17.669 04:13:59 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:17.669 04:13:59 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:17.669 04:13:59 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:17.669 04:13:59 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.RiyB7zwoZy 00:24:17.669 04:13:59 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.RiyB7zwoZy 00:24:17.669 04:13:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.RiyB7zwoZy 00:24:17.929 04:13:59 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.RiyB7zwoZy 00:24:17.929 04:13:59 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:24:17.929 04:13:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:17.929 04:13:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:17.929 04:13:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:17.929 04:13:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:17.929 04:13:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:18.188 04:13:59 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:24:18.188 04:13:59 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:18.188 04:13:59 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:24:18.188 04:13:59 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:18.188 04:13:59 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:18.188 04:13:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:18.188 04:13:59 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:18.188 04:13:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:18.188 04:13:59 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:18.188 04:13:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:18.188 [2024-12-09 04:14:00.124210] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.RiyB7zwoZy': No such file or directory 00:24:18.188 [2024-12-09 04:14:00.124249] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:18.188 [2024-12-09 04:14:00.124284] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:18.188 [2024-12-09 04:14:00.124292] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:24:18.188 [2024-12-09 04:14:00.124301] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:18.188 [2024-12-09 04:14:00.124309] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:18.188 request: 00:24:18.188 { 00:24:18.188 "name": "nvme0", 00:24:18.188 "trtype": "tcp", 00:24:18.188 "traddr": "127.0.0.1", 00:24:18.188 "adrfam": "ipv4", 00:24:18.188 "trsvcid": "4420", 00:24:18.188 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:18.188 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:18.188 "prchk_reftag": false, 00:24:18.188 "prchk_guard": false, 00:24:18.188 "hdgst": false, 00:24:18.188 "ddgst": false, 00:24:18.188 "psk": "key0", 00:24:18.188 "allow_unrecognized_csi": false, 00:24:18.188 "method": "bdev_nvme_attach_controller", 00:24:18.188 "req_id": 1 00:24:18.188 } 00:24:18.188 Got JSON-RPC error response 00:24:18.188 response: 00:24:18.188 { 00:24:18.188 "code": -19, 00:24:18.188 "message": "No such device" 00:24:18.188 } 00:24:18.447 04:14:00 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:18.447 04:14:00 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:18.447 04:14:00 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:18.447 04:14:00 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:18.447 04:14:00 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:24:18.447 04:14:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:18.447 04:14:00 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:18.447 04:14:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:18.447 04:14:00 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:18.447 04:14:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:18.447 
04:14:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:18.447 04:14:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:18.447 04:14:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Y31fxnGlEG 00:24:18.447 04:14:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:18.447 04:14:00 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:18.447 04:14:00 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:24:18.447 04:14:00 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:18.447 04:14:00 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:18.447 04:14:00 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:24:18.447 04:14:00 keyring_file -- nvmf/common.sh@733 -- # python - 00:24:18.447 04:14:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Y31fxnGlEG 00:24:18.447 04:14:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Y31fxnGlEG 00:24:18.705 04:14:00 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Y31fxnGlEG 00:24:18.705 04:14:00 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Y31fxnGlEG 00:24:18.705 04:14:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Y31fxnGlEG 00:24:18.705 04:14:00 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:18.705 04:14:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:18.963 nvme0n1 00:24:18.963 04:14:00 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:24:18.963 04:14:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:18.963 04:14:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:18.963 04:14:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:18.963 04:14:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:18.963 04:14:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:19.530 04:14:01 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:24:19.530 04:14:01 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:24:19.530 04:14:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:19.530 04:14:01 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:24:19.530 04:14:01 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:24:19.530 04:14:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:19.530 04:14:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:19.530 04:14:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:19.787 04:14:01 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:24:19.787 04:14:01 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:24:19.787 04:14:01 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:24:19.787 04:14:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:19.787 04:14:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:19.787 04:14:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:19.787 04:14:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:20.046 04:14:01 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:24:20.046 04:14:01 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:20.046 04:14:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:20.303 04:14:02 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:24:20.303 04:14:02 keyring_file -- keyring/file.sh@105 -- # jq length 00:24:20.303 04:14:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:20.561 04:14:02 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:24:20.561 04:14:02 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Y31fxnGlEG 00:24:20.561 04:14:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Y31fxnGlEG 00:24:20.819 04:14:02 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9zsROY6OuC 00:24:20.819 04:14:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9zsROY6OuC 00:24:20.819 04:14:02 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:20.819 04:14:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:21.388 nvme0n1 00:24:21.388 04:14:03 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:24:21.388 04:14:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:21.662 04:14:03 keyring_file -- keyring/file.sh@113 -- # config='{ 00:24:21.662 "subsystems": [ 00:24:21.662 { 00:24:21.662 "subsystem": "keyring", 00:24:21.662 "config": [ 00:24:21.662 { 00:24:21.662 "method": "keyring_file_add_key", 00:24:21.662 "params": { 00:24:21.662 "name": "key0", 00:24:21.662 "path": "/tmp/tmp.Y31fxnGlEG" 00:24:21.662 } 00:24:21.662 }, 00:24:21.662 { 00:24:21.662 "method": "keyring_file_add_key", 00:24:21.662 "params": { 00:24:21.662 "name": "key1", 00:24:21.662 "path": "/tmp/tmp.9zsROY6OuC" 00:24:21.662 } 00:24:21.662 } 00:24:21.662 ] 00:24:21.662 }, 00:24:21.662 { 00:24:21.662 "subsystem": "iobuf", 00:24:21.662 "config": [ 00:24:21.662 { 00:24:21.662 "method": "iobuf_set_options", 00:24:21.662 "params": { 00:24:21.662 "small_pool_count": 8192, 00:24:21.662 "large_pool_count": 1024, 00:24:21.662 "small_bufsize": 8192, 00:24:21.662 "large_bufsize": 135168, 00:24:21.662 "enable_numa": false 00:24:21.662 } 00:24:21.662 } 00:24:21.662 ] 00:24:21.662 }, 00:24:21.662 { 00:24:21.662 "subsystem": 
"sock", 00:24:21.662 "config": [ 00:24:21.662 { 00:24:21.662 "method": "sock_set_default_impl", 00:24:21.662 "params": { 00:24:21.662 "impl_name": "uring" 00:24:21.662 } 00:24:21.662 }, 00:24:21.662 { 00:24:21.662 "method": "sock_impl_set_options", 00:24:21.662 "params": { 00:24:21.662 "impl_name": "ssl", 00:24:21.662 "recv_buf_size": 4096, 00:24:21.662 "send_buf_size": 4096, 00:24:21.662 "enable_recv_pipe": true, 00:24:21.662 "enable_quickack": false, 00:24:21.662 "enable_placement_id": 0, 00:24:21.662 "enable_zerocopy_send_server": true, 00:24:21.662 "enable_zerocopy_send_client": false, 00:24:21.662 "zerocopy_threshold": 0, 00:24:21.662 "tls_version": 0, 00:24:21.662 "enable_ktls": false 00:24:21.662 } 00:24:21.662 }, 00:24:21.662 { 00:24:21.662 "method": "sock_impl_set_options", 00:24:21.662 "params": { 00:24:21.662 "impl_name": "posix", 00:24:21.662 "recv_buf_size": 2097152, 00:24:21.662 "send_buf_size": 2097152, 00:24:21.662 "enable_recv_pipe": true, 00:24:21.662 "enable_quickack": false, 00:24:21.662 "enable_placement_id": 0, 00:24:21.662 "enable_zerocopy_send_server": true, 00:24:21.662 "enable_zerocopy_send_client": false, 00:24:21.662 "zerocopy_threshold": 0, 00:24:21.662 "tls_version": 0, 00:24:21.662 "enable_ktls": false 00:24:21.662 } 00:24:21.662 }, 00:24:21.662 { 00:24:21.662 "method": "sock_impl_set_options", 00:24:21.662 "params": { 00:24:21.662 "impl_name": "uring", 00:24:21.662 "recv_buf_size": 2097152, 00:24:21.662 "send_buf_size": 2097152, 00:24:21.662 "enable_recv_pipe": true, 00:24:21.662 "enable_quickack": false, 00:24:21.662 "enable_placement_id": 0, 00:24:21.662 "enable_zerocopy_send_server": false, 00:24:21.662 "enable_zerocopy_send_client": false, 00:24:21.662 "zerocopy_threshold": 0, 00:24:21.662 "tls_version": 0, 00:24:21.662 "enable_ktls": false 00:24:21.662 } 00:24:21.662 } 00:24:21.662 ] 00:24:21.662 }, 00:24:21.662 { 00:24:21.662 "subsystem": "vmd", 00:24:21.662 "config": [] 00:24:21.662 }, 00:24:21.662 { 00:24:21.662 "subsystem": "accel", 00:24:21.662 "config": [ 00:24:21.662 { 00:24:21.662 "method": "accel_set_options", 00:24:21.662 "params": { 00:24:21.662 "small_cache_size": 128, 00:24:21.662 "large_cache_size": 16, 00:24:21.662 "task_count": 2048, 00:24:21.662 "sequence_count": 2048, 00:24:21.662 "buf_count": 2048 00:24:21.662 } 00:24:21.662 } 00:24:21.662 ] 00:24:21.662 }, 00:24:21.662 { 00:24:21.662 "subsystem": "bdev", 00:24:21.662 "config": [ 00:24:21.662 { 00:24:21.662 "method": "bdev_set_options", 00:24:21.662 "params": { 00:24:21.662 "bdev_io_pool_size": 65535, 00:24:21.662 "bdev_io_cache_size": 256, 00:24:21.662 "bdev_auto_examine": true, 00:24:21.662 "iobuf_small_cache_size": 128, 00:24:21.662 "iobuf_large_cache_size": 16 00:24:21.662 } 00:24:21.662 }, 00:24:21.662 { 00:24:21.662 "method": "bdev_raid_set_options", 00:24:21.662 "params": { 00:24:21.662 "process_window_size_kb": 1024, 00:24:21.662 "process_max_bandwidth_mb_sec": 0 00:24:21.662 } 00:24:21.662 }, 00:24:21.662 { 00:24:21.662 "method": "bdev_iscsi_set_options", 00:24:21.662 "params": { 00:24:21.662 "timeout_sec": 30 00:24:21.662 } 00:24:21.662 }, 00:24:21.662 { 00:24:21.662 "method": "bdev_nvme_set_options", 00:24:21.662 "params": { 00:24:21.662 "action_on_timeout": "none", 00:24:21.662 "timeout_us": 0, 00:24:21.662 "timeout_admin_us": 0, 00:24:21.662 "keep_alive_timeout_ms": 10000, 00:24:21.662 "arbitration_burst": 0, 00:24:21.662 "low_priority_weight": 0, 00:24:21.662 "medium_priority_weight": 0, 00:24:21.662 "high_priority_weight": 0, 00:24:21.662 "nvme_adminq_poll_period_us": 
10000, 00:24:21.662 "nvme_ioq_poll_period_us": 0, 00:24:21.662 "io_queue_requests": 512, 00:24:21.662 "delay_cmd_submit": true, 00:24:21.662 "transport_retry_count": 4, 00:24:21.662 "bdev_retry_count": 3, 00:24:21.662 "transport_ack_timeout": 0, 00:24:21.662 "ctrlr_loss_timeout_sec": 0, 00:24:21.662 "reconnect_delay_sec": 0, 00:24:21.662 "fast_io_fail_timeout_sec": 0, 00:24:21.662 "disable_auto_failback": false, 00:24:21.662 "generate_uuids": false, 00:24:21.662 "transport_tos": 0, 00:24:21.662 "nvme_error_stat": false, 00:24:21.662 "rdma_srq_size": 0, 00:24:21.662 "io_path_stat": false, 00:24:21.662 "allow_accel_sequence": false, 00:24:21.662 "rdma_max_cq_size": 0, 00:24:21.662 "rdma_cm_event_timeout_ms": 0, 00:24:21.662 "dhchap_digests": [ 00:24:21.662 "sha256", 00:24:21.662 "sha384", 00:24:21.662 "sha512" 00:24:21.662 ], 00:24:21.662 "dhchap_dhgroups": [ 00:24:21.662 "null", 00:24:21.662 "ffdhe2048", 00:24:21.662 "ffdhe3072", 00:24:21.662 "ffdhe4096", 00:24:21.662 "ffdhe6144", 00:24:21.662 "ffdhe8192" 00:24:21.662 ] 00:24:21.662 } 00:24:21.662 }, 00:24:21.662 { 00:24:21.662 "method": "bdev_nvme_attach_controller", 00:24:21.662 "params": { 00:24:21.662 "name": "nvme0", 00:24:21.662 "trtype": "TCP", 00:24:21.662 "adrfam": "IPv4", 00:24:21.662 "traddr": "127.0.0.1", 00:24:21.662 "trsvcid": "4420", 00:24:21.662 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:21.662 "prchk_reftag": false, 00:24:21.662 "prchk_guard": false, 00:24:21.662 "ctrlr_loss_timeout_sec": 0, 00:24:21.662 "reconnect_delay_sec": 0, 00:24:21.662 "fast_io_fail_timeout_sec": 0, 00:24:21.662 "psk": "key0", 00:24:21.662 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:21.662 "hdgst": false, 00:24:21.662 "ddgst": false, 00:24:21.662 "multipath": "multipath" 00:24:21.662 } 00:24:21.662 }, 00:24:21.662 { 00:24:21.662 "method": "bdev_nvme_set_hotplug", 00:24:21.662 "params": { 00:24:21.662 "period_us": 100000, 00:24:21.662 "enable": false 00:24:21.662 } 00:24:21.662 }, 00:24:21.662 { 00:24:21.662 "method": "bdev_wait_for_examine" 00:24:21.662 } 00:24:21.662 ] 00:24:21.662 }, 00:24:21.662 { 00:24:21.662 "subsystem": "nbd", 00:24:21.662 "config": [] 00:24:21.662 } 00:24:21.662 ] 00:24:21.662 }' 00:24:21.662 04:14:03 keyring_file -- keyring/file.sh@115 -- # killprocess 85814 00:24:21.662 04:14:03 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85814 ']' 00:24:21.662 04:14:03 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85814 00:24:21.662 04:14:03 keyring_file -- common/autotest_common.sh@959 -- # uname 00:24:21.662 04:14:03 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.662 04:14:03 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85814 00:24:21.662 killing process with pid 85814 00:24:21.662 Received shutdown signal, test time was about 1.000000 seconds 00:24:21.662 00:24:21.662 Latency(us) 00:24:21.662 [2024-12-09T04:14:03.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.662 [2024-12-09T04:14:03.612Z] =================================================================================================================== 00:24:21.662 [2024-12-09T04:14:03.612Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.662 04:14:03 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:21.662 04:14:03 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:21.662 04:14:03 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85814' 00:24:21.662 
04:14:03 keyring_file -- common/autotest_common.sh@973 -- # kill 85814 00:24:21.662 04:14:03 keyring_file -- common/autotest_common.sh@978 -- # wait 85814 00:24:21.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:21.959 04:14:03 keyring_file -- keyring/file.sh@118 -- # bperfpid=86051 00:24:21.959 04:14:03 keyring_file -- keyring/file.sh@120 -- # waitforlisten 86051 /var/tmp/bperf.sock 00:24:21.959 04:14:03 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 86051 ']' 00:24:21.959 04:14:03 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:21.959 04:14:03 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:21.959 04:14:03 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.959 04:14:03 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:21.959 04:14:03 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.959 04:14:03 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:24:21.959 "subsystems": [ 00:24:21.959 { 00:24:21.959 "subsystem": "keyring", 00:24:21.959 "config": [ 00:24:21.959 { 00:24:21.959 "method": "keyring_file_add_key", 00:24:21.959 "params": { 00:24:21.959 "name": "key0", 00:24:21.959 "path": "/tmp/tmp.Y31fxnGlEG" 00:24:21.959 } 00:24:21.959 }, 00:24:21.959 { 00:24:21.959 "method": "keyring_file_add_key", 00:24:21.959 "params": { 00:24:21.959 "name": "key1", 00:24:21.959 "path": "/tmp/tmp.9zsROY6OuC" 00:24:21.959 } 00:24:21.959 } 00:24:21.959 ] 00:24:21.959 }, 00:24:21.959 { 00:24:21.959 "subsystem": "iobuf", 00:24:21.959 "config": [ 00:24:21.959 { 00:24:21.959 "method": "iobuf_set_options", 00:24:21.959 "params": { 00:24:21.959 "small_pool_count": 8192, 00:24:21.959 "large_pool_count": 1024, 00:24:21.959 "small_bufsize": 8192, 00:24:21.959 "large_bufsize": 135168, 00:24:21.959 "enable_numa": false 00:24:21.959 } 00:24:21.959 } 00:24:21.959 ] 00:24:21.959 }, 00:24:21.959 { 00:24:21.959 "subsystem": "sock", 00:24:21.959 "config": [ 00:24:21.959 { 00:24:21.959 "method": "sock_set_default_impl", 00:24:21.959 "params": { 00:24:21.959 "impl_name": "uring" 00:24:21.959 } 00:24:21.959 }, 00:24:21.959 { 00:24:21.959 "method": "sock_impl_set_options", 00:24:21.959 "params": { 00:24:21.959 "impl_name": "ssl", 00:24:21.959 "recv_buf_size": 4096, 00:24:21.959 "send_buf_size": 4096, 00:24:21.959 "enable_recv_pipe": true, 00:24:21.959 "enable_quickack": false, 00:24:21.959 "enable_placement_id": 0, 00:24:21.959 "enable_zerocopy_send_server": true, 00:24:21.959 "enable_zerocopy_send_client": false, 00:24:21.959 "zerocopy_threshold": 0, 00:24:21.959 "tls_version": 0, 00:24:21.959 "enable_ktls": false 00:24:21.959 } 00:24:21.959 }, 00:24:21.959 { 00:24:21.959 "method": "sock_impl_set_options", 00:24:21.959 "params": { 00:24:21.959 "impl_name": "posix", 00:24:21.959 "recv_buf_size": 2097152, 00:24:21.959 "send_buf_size": 2097152, 00:24:21.959 "enable_recv_pipe": true, 00:24:21.959 "enable_quickack": false, 00:24:21.959 "enable_placement_id": 0, 00:24:21.959 "enable_zerocopy_send_server": true, 00:24:21.959 "enable_zerocopy_send_client": false, 00:24:21.959 "zerocopy_threshold": 0, 00:24:21.959 "tls_version": 0, 00:24:21.959 "enable_ktls": false 00:24:21.959 } 00:24:21.959 }, 00:24:21.959 { 00:24:21.959 "method": 
"sock_impl_set_options", 00:24:21.959 "params": { 00:24:21.959 "impl_name": "uring", 00:24:21.959 "recv_buf_size": 2097152, 00:24:21.959 "send_buf_size": 2097152, 00:24:21.959 "enable_recv_pipe": true, 00:24:21.959 "enable_quickack": false, 00:24:21.959 "enable_placement_id": 0, 00:24:21.959 "enable_zerocopy_send_server": false, 00:24:21.959 "enable_zerocopy_send_client": false, 00:24:21.959 "zerocopy_threshold": 0, 00:24:21.959 "tls_version": 0, 00:24:21.959 "enable_ktls": false 00:24:21.959 } 00:24:21.959 } 00:24:21.960 ] 00:24:21.960 }, 00:24:21.960 { 00:24:21.960 "subsystem": "vmd", 00:24:21.960 "config": [] 00:24:21.960 }, 00:24:21.960 { 00:24:21.960 "subsystem": "accel", 00:24:21.960 "config": [ 00:24:21.960 { 00:24:21.960 "method": "accel_set_options", 00:24:21.960 "params": { 00:24:21.960 "small_cache_size": 128, 00:24:21.960 "large_cache_size": 16, 00:24:21.960 "task_count": 2048, 00:24:21.960 "sequence_count": 2048, 00:24:21.960 "buf_count": 2048 00:24:21.960 } 00:24:21.960 } 00:24:21.960 ] 00:24:21.960 }, 00:24:21.960 { 00:24:21.960 "subsystem": "bdev", 00:24:21.960 "config": [ 00:24:21.960 { 00:24:21.960 "method": "bdev_set_options", 00:24:21.960 "params": { 00:24:21.960 "bdev_io_pool_size": 65535, 00:24:21.960 "bdev_io_cache_size": 256, 00:24:21.960 "bdev_auto_examine": true, 00:24:21.960 "iobuf_small_cache_size": 128, 00:24:21.960 "iobuf_large_cache_size": 16 00:24:21.960 } 00:24:21.960 }, 00:24:21.960 { 00:24:21.960 "method": "bdev_raid_set_options", 00:24:21.960 "params": { 00:24:21.960 "process_window_size_kb": 1024, 00:24:21.960 "process_max_bandwidth_mb_sec": 0 00:24:21.960 } 00:24:21.960 }, 00:24:21.960 { 00:24:21.960 "method": "bdev_iscsi_set_options", 00:24:21.960 "params": { 00:24:21.960 "timeout_sec": 30 00:24:21.960 } 00:24:21.960 }, 00:24:21.960 { 00:24:21.960 "method": "bdev_nvme_set_options", 00:24:21.960 "params": { 00:24:21.960 "action_on_timeout": "none", 00:24:21.960 "timeout_us": 0, 00:24:21.960 "timeout_admin_us": 0, 00:24:21.960 "keep_alive_timeout_ms": 10000, 00:24:21.960 "arbitration_burst": 0, 00:24:21.960 "low_priority_weight": 0, 00:24:21.960 "medium_priority_weight": 0, 00:24:21.960 "high_priority_weight": 0, 00:24:21.960 "nvme_adminq_poll_period_us": 10000, 00:24:21.960 "nvme_ioq_poll_period_us": 0, 00:24:21.960 "io_queue_requests": 512, 00:24:21.960 "delay_cmd_submit": true, 00:24:21.960 "transport_retry_count": 4, 00:24:21.960 "bdev_retry_count": 3, 00:24:21.960 "transport_ack_timeout": 0, 00:24:21.960 "ctrlr_loss_timeout_sec": 0, 00:24:21.960 "reconnect_delay_sec": 0, 00:24:21.960 "fast_io_fail_timeout_sec": 0, 00:24:21.960 "disable_auto_failback": false, 00:24:21.960 "generate_uuids": false, 00:24:21.960 "transport_tos": 0, 00:24:21.960 "nvme_error_stat": false, 00:24:21.960 "rdma_srq_size": 0, 00:24:21.960 "io_path_stat": false, 00:24:21.960 "allow_accel_sequence": false, 00:24:21.960 "rdma_max_cq_size": 0, 00:24:21.960 "rdma_cm_event_timeout_ms": 0, 00:24:21.960 "dhchap_digests": [ 00:24:21.960 "sha256", 00:24:21.960 "sha384", 00:24:21.960 "sha512" 00:24:21.960 ], 00:24:21.960 "dhchap_dhgroups": [ 00:24:21.960 "null", 00:24:21.960 "ffdhe2048", 00:24:21.960 "ffdhe3072", 00:24:21.960 "ffdhe4096", 00:24:21.960 "ffdhe6144", 00:24:21.960 "ffdhe8192" 00:24:21.960 ] 00:24:21.960 } 00:24:21.960 }, 00:24:21.960 { 00:24:21.960 "method": "bdev_nvme_attach_controller", 00:24:21.960 "params": { 00:24:21.960 "name": "nvme0", 00:24:21.960 "trtype": "TCP", 00:24:21.960 "adrfam": "IPv4", 00:24:21.960 "traddr": "127.0.0.1", 00:24:21.960 "trsvcid": "4420", 
00:24:21.960 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:21.960 "prchk_reftag": false, 00:24:21.960 "prchk_guard": false, 00:24:21.960 "ctrlr_loss_timeout_sec": 0, 00:24:21.960 "reconnect_delay_sec": 0, 00:24:21.960 "fast_io_fail_timeout_sec": 0, 00:24:21.960 "psk": "key0", 00:24:21.960 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:21.960 "hdgst": false, 00:24:21.960 "ddgst": false, 00:24:21.960 "multipath": "multipath" 00:24:21.960 } 00:24:21.960 }, 00:24:21.960 { 00:24:21.960 "method": "bdev_nvme_set_hotplug", 00:24:21.960 "params": { 00:24:21.960 "period_us": 100000, 00:24:21.960 "enable": false 00:24:21.960 } 00:24:21.960 }, 00:24:21.960 { 00:24:21.960 "method": "bdev_wait_for_examine" 00:24:21.960 } 00:24:21.960 ] 00:24:21.960 }, 00:24:21.960 { 00:24:21.960 "subsystem": "nbd", 00:24:21.960 "config": [] 00:24:21.960 } 00:24:21.960 ] 00:24:21.960 }' 00:24:21.960 04:14:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:21.960 [2024-12-09 04:14:03.720561] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 00:24:21.960 [2024-12-09 04:14:03.720644] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86051 ] 00:24:21.960 [2024-12-09 04:14:03.858103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.234 [2024-12-09 04:14:03.902019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.234 [2024-12-09 04:14:04.054720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:22.234 [2024-12-09 04:14:04.122279] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:22.801 04:14:04 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:22.801 04:14:04 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:24:22.801 04:14:04 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:24:22.801 04:14:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:22.801 04:14:04 keyring_file -- keyring/file.sh@121 -- # jq length 00:24:23.111 04:14:04 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:24:23.111 04:14:04 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:24:23.111 04:14:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:23.111 04:14:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:23.111 04:14:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:23.111 04:14:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:23.111 04:14:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:23.373 04:14:05 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:24:23.373 04:14:05 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:24:23.373 04:14:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:23.373 04:14:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:23.373 04:14:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:23.373 04:14:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:23.373 04:14:05 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:23.630 04:14:05 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:24:23.630 04:14:05 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:24:23.630 04:14:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:23.630 04:14:05 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:24:23.887 04:14:05 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:24:23.887 04:14:05 keyring_file -- keyring/file.sh@1 -- # cleanup 00:24:23.887 04:14:05 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Y31fxnGlEG /tmp/tmp.9zsROY6OuC 00:24:23.887 04:14:05 keyring_file -- keyring/file.sh@20 -- # killprocess 86051 00:24:23.887 04:14:05 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 86051 ']' 00:24:23.887 04:14:05 keyring_file -- common/autotest_common.sh@958 -- # kill -0 86051 00:24:23.887 04:14:05 keyring_file -- common/autotest_common.sh@959 -- # uname 00:24:23.887 04:14:05 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.887 04:14:05 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86051 00:24:23.887 killing process with pid 86051 00:24:23.887 Received shutdown signal, test time was about 1.000000 seconds 00:24:23.887 00:24:23.887 Latency(us) 00:24:23.887 [2024-12-09T04:14:05.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.887 [2024-12-09T04:14:05.837Z] =================================================================================================================== 00:24:23.887 [2024-12-09T04:14:05.837Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:23.887 04:14:05 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:23.887 04:14:05 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:23.887 04:14:05 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86051' 00:24:23.887 04:14:05 keyring_file -- common/autotest_common.sh@973 -- # kill 86051 00:24:23.887 04:14:05 keyring_file -- common/autotest_common.sh@978 -- # wait 86051 00:24:24.145 04:14:05 keyring_file -- keyring/file.sh@21 -- # killprocess 85804 00:24:24.145 04:14:05 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85804 ']' 00:24:24.145 04:14:05 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85804 00:24:24.145 04:14:05 keyring_file -- common/autotest_common.sh@959 -- # uname 00:24:24.145 04:14:05 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.145 04:14:05 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85804 00:24:24.145 killing process with pid 85804 00:24:24.145 04:14:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:24.145 04:14:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:24.145 04:14:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85804' 00:24:24.145 04:14:06 keyring_file -- common/autotest_common.sh@973 -- # kill 85804 00:24:24.145 04:14:06 keyring_file -- common/autotest_common.sh@978 -- # wait 85804 00:24:24.713 00:24:24.713 real 0m14.457s 00:24:24.713 user 0m35.823s 00:24:24.713 sys 0m3.173s 00:24:24.713 04:14:06 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:24.713 
************************************ 00:24:24.713 END TEST keyring_file 00:24:24.713 ************************************ 00:24:24.713 04:14:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:24.713 04:14:06 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:24:24.713 04:14:06 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:24.713 04:14:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:24.713 04:14:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:24.713 04:14:06 -- common/autotest_common.sh@10 -- # set +x 00:24:24.713 ************************************ 00:24:24.713 START TEST keyring_linux 00:24:24.713 ************************************ 00:24:24.713 04:14:06 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:24.713 Joined session keyring: 158246990 00:24:24.713 * Looking for test storage... 00:24:24.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:24.713 04:14:06 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:24.713 04:14:06 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:24:24.713 04:14:06 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:24.713 04:14:06 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@345 -- # : 1 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:24.713 04:14:06 keyring_linux -- scripts/common.sh@368 -- # return 0 00:24:24.713 04:14:06 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:24.713 04:14:06 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:24.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.713 --rc genhtml_branch_coverage=1 00:24:24.713 --rc genhtml_function_coverage=1 00:24:24.713 --rc genhtml_legend=1 00:24:24.713 --rc geninfo_all_blocks=1 00:24:24.713 --rc geninfo_unexecuted_blocks=1 00:24:24.713 00:24:24.713 ' 00:24:24.713 04:14:06 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:24.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.713 --rc genhtml_branch_coverage=1 00:24:24.713 --rc genhtml_function_coverage=1 00:24:24.713 --rc genhtml_legend=1 00:24:24.713 --rc geninfo_all_blocks=1 00:24:24.713 --rc geninfo_unexecuted_blocks=1 00:24:24.713 00:24:24.713 ' 00:24:24.713 04:14:06 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:24.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.713 --rc genhtml_branch_coverage=1 00:24:24.713 --rc genhtml_function_coverage=1 00:24:24.713 --rc genhtml_legend=1 00:24:24.713 --rc geninfo_all_blocks=1 00:24:24.713 --rc geninfo_unexecuted_blocks=1 00:24:24.713 00:24:24.713 ' 00:24:24.713 04:14:06 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:24.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.713 --rc genhtml_branch_coverage=1 00:24:24.713 --rc genhtml_function_coverage=1 00:24:24.713 --rc genhtml_legend=1 00:24:24.713 --rc geninfo_all_blocks=1 00:24:24.713 --rc geninfo_unexecuted_blocks=1 00:24:24.713 00:24:24.713 ' 00:24:24.713 04:14:06 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:24.713 04:14:06 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:24.713 04:14:06 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:24:24.713 04:14:06 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.713 04:14:06 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.714 04:14:06 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=9ed3da8d-b493-400f-8e42-fb307dd7edcc 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:24.714 04:14:06 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:24:24.714 04:14:06 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.714 04:14:06 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.714 04:14:06 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.714 04:14:06 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.714 04:14:06 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.714 04:14:06 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.714 04:14:06 keyring_linux -- paths/export.sh@5 -- # export PATH 00:24:24.714 04:14:06 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:24.714 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:24.714 04:14:06 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:24.714 04:14:06 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:24.714 04:14:06 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:24.714 04:14:06 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:24:24.714 04:14:06 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:24:24.714 04:14:06 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:24:24.714 04:14:06 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:24:24.714 04:14:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:24.714 04:14:06 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:24:24.714 04:14:06 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:24.714 04:14:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:24.714 04:14:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:24:24.714 04:14:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:24:24.714 04:14:06 keyring_linux -- nvmf/common.sh@733 -- # python - 00:24:24.972 04:14:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:24:24.972 /tmp/:spdk-test:key0 00:24:24.972 04:14:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:24:24.972 04:14:06 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:24:24.972 04:14:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:24.972 04:14:06 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:24:24.972 04:14:06 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:24.972 04:14:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:24.972 04:14:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:24:24.972 04:14:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:24:24.972 04:14:06 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:24.972 04:14:06 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:24:24.972 04:14:06 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:24.972 04:14:06 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:24:24.972 04:14:06 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:24:24.972 04:14:06 keyring_linux -- nvmf/common.sh@733 -- # python - 00:24:24.972 04:14:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:24:24.972 /tmp/:spdk-test:key1 00:24:24.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.972 04:14:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:24:24.972 04:14:06 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=86177 00:24:24.972 04:14:06 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:24.972 04:14:06 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 86177 00:24:24.972 04:14:06 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 86177 ']' 00:24:24.972 04:14:06 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.972 04:14:06 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.972 04:14:06 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.972 04:14:06 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.972 04:14:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:24.972 [2024-12-09 04:14:06.832879] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:24:24.973 [2024-12-09 04:14:06.833201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86177 ] 00:24:25.231 [2024-12-09 04:14:06.971314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.231 [2024-12-09 04:14:07.011009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.231 [2024-12-09 04:14:07.077114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:26.166 04:14:07 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.166 04:14:07 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:24:26.166 04:14:07 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:24:26.166 04:14:07 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.166 04:14:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:26.166 [2024-12-09 04:14:07.788418] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.166 null0 00:24:26.166 [2024-12-09 04:14:07.820396] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:26.166 [2024-12-09 04:14:07.820712] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:26.166 04:14:07 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.166 04:14:07 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:24:26.166 425825228 00:24:26.166 04:14:07 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:24:26.166 667937006 00:24:26.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:26.166 04:14:07 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=86195 00:24:26.166 04:14:07 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:24:26.166 04:14:07 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 86195 /var/tmp/bperf.sock 00:24:26.166 04:14:07 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 86195 ']' 00:24:26.166 04:14:07 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:26.166 04:14:07 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.166 04:14:07 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:26.166 04:14:07 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.166 04:14:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:26.166 [2024-12-09 04:14:07.903868] Starting SPDK v25.01-pre git sha1 5f032e8b7 / DPDK 24.03.0 initialization... 
00:24:26.166 [2024-12-09 04:14:07.904150] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86195 ] 00:24:26.166 [2024-12-09 04:14:08.054399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.424 [2024-12-09 04:14:08.127529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.991 04:14:08 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.991 04:14:08 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:24:26.991 04:14:08 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:24:26.991 04:14:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:24:27.249 04:14:09 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:24:27.249 04:14:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:27.507 [2024-12-09 04:14:09.315146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:27.507 04:14:09 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:27.507 04:14:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:27.765 [2024-12-09 04:14:09.631039] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:27.765 nvme0n1 00:24:27.765 04:14:09 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:24:27.765 04:14:09 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:24:27.765 04:14:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:28.024 04:14:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:28.024 04:14:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:28.024 04:14:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:28.282 04:14:09 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:24:28.282 04:14:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:28.282 04:14:09 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:24:28.282 04:14:09 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:24:28.282 04:14:09 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:28.282 04:14:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:28.282 04:14:09 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:24:28.282 04:14:10 keyring_linux -- keyring/linux.sh@25 -- # sn=425825228 00:24:28.282 04:14:10 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:24:28.282 04:14:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
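Everything on the initiator side is driven over bdevperf's private RPC socket (it was started with -r /var/tmp/bperf.sock and --wait-for-rpc): keyring_linux support is switched on first, only then is the framework allowed to initialize, and the controller is attached with a --psk that names the kernel key rather than a key file. Condensed from the rpc.py calls above (script path abbreviated):

    rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
    rpc.py -s /var/tmp/bperf.sock framework_start_init
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0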
00:24:28.282 04:14:10 keyring_linux -- keyring/linux.sh@26 -- # [[ 425825228 == \4\2\5\8\2\5\2\2\8 ]] 00:24:28.282 04:14:10 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 425825228 00:24:28.282 04:14:10 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:24:28.282 04:14:10 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:28.540 Running I/O for 1 seconds... 00:24:29.475 13594.00 IOPS, 53.10 MiB/s 00:24:29.476 Latency(us) 00:24:29.476 [2024-12-09T04:14:11.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.476 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:29.476 nvme0n1 : 1.01 13594.77 53.10 0.00 0.00 9365.92 3902.37 12392.26 00:24:29.476 [2024-12-09T04:14:11.426Z] =================================================================================================================== 00:24:29.476 [2024-12-09T04:14:11.426Z] Total : 13594.77 53.10 0.00 0.00 9365.92 3902.37 12392.26 00:24:29.476 { 00:24:29.476 "results": [ 00:24:29.476 { 00:24:29.476 "job": "nvme0n1", 00:24:29.476 "core_mask": "0x2", 00:24:29.476 "workload": "randread", 00:24:29.476 "status": "finished", 00:24:29.476 "queue_depth": 128, 00:24:29.476 "io_size": 4096, 00:24:29.476 "runtime": 1.009359, 00:24:29.476 "iops": 13594.766579581696, 00:24:29.476 "mibps": 53.104556951491, 00:24:29.476 "io_failed": 0, 00:24:29.476 "io_timeout": 0, 00:24:29.476 "avg_latency_us": 9365.923987756885, 00:24:29.476 "min_latency_us": 3902.370909090909, 00:24:29.476 "max_latency_us": 12392.261818181818 00:24:29.476 } 00:24:29.476 ], 00:24:29.476 "core_count": 1 00:24:29.476 } 00:24:29.476 04:14:11 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:29.476 04:14:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:29.734 04:14:11 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:24:29.734 04:14:11 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:24:29.734 04:14:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:29.734 04:14:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:29.734 04:14:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:29.734 04:14:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:29.993 04:14:11 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:24:29.993 04:14:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:29.993 04:14:11 keyring_linux -- keyring/linux.sh@23 -- # return 00:24:29.993 04:14:11 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:29.993 04:14:11 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:24:29.993 04:14:11 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:29.993 
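perform_tests reports the run both as the human-readable table and as the JSON block above, so the headline numbers can be extracted mechanically instead of scraped from the table. A small sketch, assuming the JSON was captured to a hypothetical results.json:

    # results.json is assumed to hold the JSON object printed by bdevperf above
    jq -r '.results[0] | "\(.iops) IOPS, avg \(.avg_latency_us) us"' results.json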
04:14:11 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:29.993 04:14:11 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:29.993 04:14:11 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:29.993 04:14:11 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:29.993 04:14:11 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:29.993 04:14:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:30.251 [2024-12-09 04:14:12.143465] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:30.251 [2024-12-09 04:14:12.144321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e31d0 (107): Transport endpoint is not connected 00:24:30.251 [2024-12-09 04:14:12.145310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e31d0 (9): Bad file descriptor 00:24:30.251 [2024-12-09 04:14:12.146308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:24:30.251 [2024-12-09 04:14:12.146358] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:30.251 [2024-12-09 04:14:12.146371] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:24:30.251 [2024-12-09 04:14:12.146382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:24:30.251 request: 00:24:30.251 { 00:24:30.251 "name": "nvme0", 00:24:30.251 "trtype": "tcp", 00:24:30.251 "traddr": "127.0.0.1", 00:24:30.251 "adrfam": "ipv4", 00:24:30.251 "trsvcid": "4420", 00:24:30.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:30.251 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:30.251 "prchk_reftag": false, 00:24:30.251 "prchk_guard": false, 00:24:30.251 "hdgst": false, 00:24:30.251 "ddgst": false, 00:24:30.251 "psk": ":spdk-test:key1", 00:24:30.251 "allow_unrecognized_csi": false, 00:24:30.251 "method": "bdev_nvme_attach_controller", 00:24:30.251 "req_id": 1 00:24:30.251 } 00:24:30.251 Got JSON-RPC error response 00:24:30.251 response: 00:24:30.251 { 00:24:30.252 "code": -5, 00:24:30.252 "message": "Input/output error" 00:24:30.252 } 00:24:30.252 04:14:12 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:24:30.252 04:14:12 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:30.252 04:14:12 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:30.252 04:14:12 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:30.252 04:14:12 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:24:30.252 04:14:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:30.252 04:14:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:24:30.252 04:14:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:24:30.252 04:14:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:24:30.252 04:14:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:30.252 04:14:12 keyring_linux -- keyring/linux.sh@33 -- # sn=425825228 00:24:30.252 04:14:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 425825228 00:24:30.252 1 links removed 00:24:30.252 04:14:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:30.252 04:14:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:24:30.252 04:14:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:24:30.252 04:14:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:24:30.252 04:14:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:24:30.252 04:14:12 keyring_linux -- keyring/linux.sh@33 -- # sn=667937006 00:24:30.252 04:14:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 667937006 00:24:30.252 1 links removed 00:24:30.252 04:14:12 keyring_linux -- keyring/linux.sh@41 -- # killprocess 86195 00:24:30.252 04:14:12 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 86195 ']' 00:24:30.252 04:14:12 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 86195 00:24:30.252 04:14:12 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:24:30.252 04:14:12 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:30.252 04:14:12 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86195 00:24:30.509 killing process with pid 86195 00:24:30.509 Received shutdown signal, test time was about 1.000000 seconds 00:24:30.509 00:24:30.509 Latency(us) 00:24:30.509 [2024-12-09T04:14:12.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.509 [2024-12-09T04:14:12.459Z] =================================================================================================================== 00:24:30.509 [2024-12-09T04:14:12.459Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:30.509 04:14:12 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:30.509 04:14:12 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:30.509 04:14:12 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86195' 00:24:30.509 04:14:12 keyring_linux -- common/autotest_common.sh@973 -- # kill 86195 00:24:30.509 04:14:12 keyring_linux -- common/autotest_common.sh@978 -- # wait 86195 00:24:30.509 04:14:12 keyring_linux -- keyring/linux.sh@42 -- # killprocess 86177 00:24:30.509 04:14:12 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 86177 ']' 00:24:30.509 04:14:12 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 86177 00:24:30.509 04:14:12 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:24:30.509 04:14:12 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:30.509 04:14:12 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86177 00:24:30.767 killing process with pid 86177 00:24:30.767 04:14:12 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:30.767 04:14:12 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:30.767 04:14:12 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86177' 00:24:30.767 04:14:12 keyring_linux -- common/autotest_common.sh@973 -- # kill 86177 00:24:30.767 04:14:12 keyring_linux -- common/autotest_common.sh@978 -- # wait 86177 00:24:31.025 ************************************ 00:24:31.025 END TEST keyring_linux 00:24:31.025 ************************************ 00:24:31.025 00:24:31.025 real 0m6.404s 00:24:31.025 user 0m12.161s 00:24:31.025 sys 0m1.637s 00:24:31.025 04:14:12 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.025 04:14:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:31.025 04:14:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:24:31.025 04:14:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:24:31.025 04:14:12 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:31.025 04:14:12 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:24:31.025 04:14:12 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:31.025 04:14:12 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:24:31.025 04:14:12 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:24:31.025 04:14:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:24:31.025 04:14:12 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:31.025 04:14:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:24:31.025 04:14:12 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:24:31.025 04:14:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:24:31.025 04:14:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:24:31.025 04:14:12 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:24:31.025 04:14:12 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:24:31.025 04:14:12 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:24:31.025 04:14:12 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:24:31.025 04:14:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:31.025 04:14:12 -- common/autotest_common.sh@10 -- # set +x 00:24:31.025 04:14:12 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:24:31.025 04:14:12 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:24:31.025 04:14:12 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:24:31.025 04:14:12 -- common/autotest_common.sh@10 -- # set +x 00:24:32.921 INFO: APP EXITING 00:24:32.921 INFO: killing all VMs 
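Cleanup in the keyring_linux teardown above is symmetric with setup: each test key is looked up by description to recover its serial and then unlinked from the session keyring, which is why the search/unlink pair and the "1 links removed" message appear once per key before the two processes are killed. Roughly, per key:

    for name in :spdk-test:key0 :spdk-test:key1; do
        sn=$(keyctl search @s user "$name")
        keyctl unlink "$sn"        # the log reports "1 links removed" for each
    done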
00:24:32.921 INFO: killing vhost app 00:24:32.921 INFO: EXIT DONE 00:24:33.853 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:33.853 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:24:33.853 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:24:34.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:34.419 Cleaning 00:24:34.419 Removing: /var/run/dpdk/spdk0/config 00:24:34.419 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:34.419 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:34.419 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:34.419 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:34.419 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:34.419 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:34.419 Removing: /var/run/dpdk/spdk1/config 00:24:34.419 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:34.419 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:34.419 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:34.419 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:34.419 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:34.419 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:34.677 Removing: /var/run/dpdk/spdk2/config 00:24:34.677 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:34.677 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:34.677 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:34.677 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:34.677 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:34.677 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:34.677 Removing: /var/run/dpdk/spdk3/config 00:24:34.677 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:34.677 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:34.677 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:34.677 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:34.677 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:34.677 Removing: /var/run/dpdk/spdk3/hugepage_info 00:24:34.677 Removing: /var/run/dpdk/spdk4/config 00:24:34.677 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:34.677 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:34.677 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:34.677 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:34.677 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:24:34.677 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:34.677 Removing: /dev/shm/nvmf_trace.0 00:24:34.677 Removing: /dev/shm/spdk_tgt_trace.pid56904 00:24:34.677 Removing: /var/run/dpdk/spdk0 00:24:34.677 Removing: /var/run/dpdk/spdk1 00:24:34.677 Removing: /var/run/dpdk/spdk2 00:24:34.677 Removing: /var/run/dpdk/spdk3 00:24:34.677 Removing: /var/run/dpdk/spdk4 00:24:34.677 Removing: /var/run/dpdk/spdk_pid56744 00:24:34.677 Removing: /var/run/dpdk/spdk_pid56904 00:24:34.677 Removing: /var/run/dpdk/spdk_pid57110 00:24:34.677 Removing: /var/run/dpdk/spdk_pid57196 00:24:34.677 Removing: /var/run/dpdk/spdk_pid57229 00:24:34.677 Removing: /var/run/dpdk/spdk_pid57339 00:24:34.677 Removing: /var/run/dpdk/spdk_pid57357 00:24:34.677 Removing: /var/run/dpdk/spdk_pid57496 00:24:34.677 Removing: /var/run/dpdk/spdk_pid57697 00:24:34.677 Removing: /var/run/dpdk/spdk_pid57851 00:24:34.677 Removing: /var/run/dpdk/spdk_pid57936 00:24:34.677 
Removing: /var/run/dpdk/spdk_pid58020 00:24:34.677 Removing: /var/run/dpdk/spdk_pid58119 00:24:34.677 Removing: /var/run/dpdk/spdk_pid58202 00:24:34.677 Removing: /var/run/dpdk/spdk_pid58235 00:24:34.677 Removing: /var/run/dpdk/spdk_pid58270 00:24:34.677 Removing: /var/run/dpdk/spdk_pid58340 00:24:34.677 Removing: /var/run/dpdk/spdk_pid58445 00:24:34.677 Removing: /var/run/dpdk/spdk_pid58890 00:24:34.677 Removing: /var/run/dpdk/spdk_pid58935 00:24:34.677 Removing: /var/run/dpdk/spdk_pid58991 00:24:34.677 Removing: /var/run/dpdk/spdk_pid59007 00:24:34.677 Removing: /var/run/dpdk/spdk_pid59080 00:24:34.677 Removing: /var/run/dpdk/spdk_pid59096 00:24:34.677 Removing: /var/run/dpdk/spdk_pid59173 00:24:34.677 Removing: /var/run/dpdk/spdk_pid59182 00:24:34.677 Removing: /var/run/dpdk/spdk_pid59230 00:24:34.677 Removing: /var/run/dpdk/spdk_pid59252 00:24:34.677 Removing: /var/run/dpdk/spdk_pid59298 00:24:34.677 Removing: /var/run/dpdk/spdk_pid59318 00:24:34.677 Removing: /var/run/dpdk/spdk_pid59454 00:24:34.677 Removing: /var/run/dpdk/spdk_pid59488 00:24:34.677 Removing: /var/run/dpdk/spdk_pid59572 00:24:34.677 Removing: /var/run/dpdk/spdk_pid59904 00:24:34.677 Removing: /var/run/dpdk/spdk_pid59916 00:24:34.677 Removing: /var/run/dpdk/spdk_pid59952 00:24:34.677 Removing: /var/run/dpdk/spdk_pid59966 00:24:34.677 Removing: /var/run/dpdk/spdk_pid59987 00:24:34.677 Removing: /var/run/dpdk/spdk_pid60006 00:24:34.677 Removing: /var/run/dpdk/spdk_pid60022 00:24:34.677 Removing: /var/run/dpdk/spdk_pid60037 00:24:34.677 Removing: /var/run/dpdk/spdk_pid60056 00:24:34.677 Removing: /var/run/dpdk/spdk_pid60075 00:24:34.677 Removing: /var/run/dpdk/spdk_pid60096 00:24:34.677 Removing: /var/run/dpdk/spdk_pid60115 00:24:34.677 Removing: /var/run/dpdk/spdk_pid60123 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60144 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60163 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60183 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60193 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60212 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60231 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60252 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60277 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60296 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60331 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60400 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60426 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60441 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60470 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60479 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60487 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60529 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60548 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60577 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60586 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60596 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60605 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60615 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60624 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60634 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60643 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60677 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60704 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60713 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60742 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60751 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60764 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60805 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60822 00:24:34.935 Removing: 
/var/run/dpdk/spdk_pid60848 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60856 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60869 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60882 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60884 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60900 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60913 00:24:34.935 Removing: /var/run/dpdk/spdk_pid60919 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61008 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61061 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61185 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61218 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61263 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61283 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61305 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61325 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61362 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61378 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61456 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61477 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61527 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61595 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61667 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61696 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61803 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61851 00:24:34.935 Removing: /var/run/dpdk/spdk_pid61878 00:24:34.935 Removing: /var/run/dpdk/spdk_pid62110 00:24:34.935 Removing: /var/run/dpdk/spdk_pid62213 00:24:34.935 Removing: /var/run/dpdk/spdk_pid62246 00:24:34.935 Removing: /var/run/dpdk/spdk_pid62271 00:24:34.935 Removing: /var/run/dpdk/spdk_pid62310 00:24:34.935 Removing: /var/run/dpdk/spdk_pid62344 00:24:34.935 Removing: /var/run/dpdk/spdk_pid62377 00:24:34.935 Removing: /var/run/dpdk/spdk_pid62414 00:24:34.935 Removing: /var/run/dpdk/spdk_pid62816 00:24:34.935 Removing: /var/run/dpdk/spdk_pid62854 00:24:34.935 Removing: /var/run/dpdk/spdk_pid63205 00:24:34.935 Removing: /var/run/dpdk/spdk_pid63665 00:24:34.935 Removing: /var/run/dpdk/spdk_pid63946 00:24:34.935 Removing: /var/run/dpdk/spdk_pid64845 00:24:34.936 Removing: /var/run/dpdk/spdk_pid65771 00:24:34.936 Removing: /var/run/dpdk/spdk_pid65889 00:24:34.936 Removing: /var/run/dpdk/spdk_pid65961 00:24:34.936 Removing: /var/run/dpdk/spdk_pid67399 00:24:34.936 Removing: /var/run/dpdk/spdk_pid67710 00:24:35.193 Removing: /var/run/dpdk/spdk_pid71457 00:24:35.193 Removing: /var/run/dpdk/spdk_pid71826 00:24:35.193 Removing: /var/run/dpdk/spdk_pid71935 00:24:35.193 Removing: /var/run/dpdk/spdk_pid72064 00:24:35.193 Removing: /var/run/dpdk/spdk_pid72098 00:24:35.193 Removing: /var/run/dpdk/spdk_pid72131 00:24:35.193 Removing: /var/run/dpdk/spdk_pid72161 00:24:35.193 Removing: /var/run/dpdk/spdk_pid72253 00:24:35.193 Removing: /var/run/dpdk/spdk_pid72389 00:24:35.193 Removing: /var/run/dpdk/spdk_pid72545 00:24:35.193 Removing: /var/run/dpdk/spdk_pid72627 00:24:35.193 Removing: /var/run/dpdk/spdk_pid72823 00:24:35.193 Removing: /var/run/dpdk/spdk_pid72899 00:24:35.193 Removing: /var/run/dpdk/spdk_pid72998 00:24:35.193 Removing: /var/run/dpdk/spdk_pid73357 00:24:35.193 Removing: /var/run/dpdk/spdk_pid73764 00:24:35.193 Removing: /var/run/dpdk/spdk_pid73765 00:24:35.193 Removing: /var/run/dpdk/spdk_pid73766 00:24:35.193 Removing: /var/run/dpdk/spdk_pid74035 00:24:35.193 Removing: /var/run/dpdk/spdk_pid74294 00:24:35.193 Removing: /var/run/dpdk/spdk_pid74674 00:24:35.193 Removing: /var/run/dpdk/spdk_pid74682 00:24:35.193 Removing: /var/run/dpdk/spdk_pid75006 00:24:35.193 Removing: /var/run/dpdk/spdk_pid75027 
00:24:35.193 Removing: /var/run/dpdk/spdk_pid75041 00:24:35.193 Removing: /var/run/dpdk/spdk_pid75073 00:24:35.193 Removing: /var/run/dpdk/spdk_pid75078 00:24:35.193 Removing: /var/run/dpdk/spdk_pid75442 00:24:35.193 Removing: /var/run/dpdk/spdk_pid75485 00:24:35.193 Removing: /var/run/dpdk/spdk_pid75823 00:24:35.193 Removing: /var/run/dpdk/spdk_pid76023 00:24:35.193 Removing: /var/run/dpdk/spdk_pid76463 00:24:35.193 Removing: /var/run/dpdk/spdk_pid77014 00:24:35.193 Removing: /var/run/dpdk/spdk_pid77906 00:24:35.193 Removing: /var/run/dpdk/spdk_pid78530 00:24:35.193 Removing: /var/run/dpdk/spdk_pid78536 00:24:35.193 Removing: /var/run/dpdk/spdk_pid80553 00:24:35.193 Removing: /var/run/dpdk/spdk_pid80606 00:24:35.193 Removing: /var/run/dpdk/spdk_pid80659 00:24:35.193 Removing: /var/run/dpdk/spdk_pid80720 00:24:35.193 Removing: /var/run/dpdk/spdk_pid80828 00:24:35.193 Removing: /var/run/dpdk/spdk_pid80888 00:24:35.193 Removing: /var/run/dpdk/spdk_pid80943 00:24:35.193 Removing: /var/run/dpdk/spdk_pid81009 00:24:35.193 Removing: /var/run/dpdk/spdk_pid81371 00:24:35.194 Removing: /var/run/dpdk/spdk_pid82576 00:24:35.194 Removing: /var/run/dpdk/spdk_pid82714 00:24:35.194 Removing: /var/run/dpdk/spdk_pid82946 00:24:35.194 Removing: /var/run/dpdk/spdk_pid83545 00:24:35.194 Removing: /var/run/dpdk/spdk_pid83705 00:24:35.194 Removing: /var/run/dpdk/spdk_pid83868 00:24:35.194 Removing: /var/run/dpdk/spdk_pid83965 00:24:35.194 Removing: /var/run/dpdk/spdk_pid84130 00:24:35.194 Removing: /var/run/dpdk/spdk_pid84244 00:24:35.194 Removing: /var/run/dpdk/spdk_pid84945 00:24:35.194 Removing: /var/run/dpdk/spdk_pid84980 00:24:35.194 Removing: /var/run/dpdk/spdk_pid85010 00:24:35.194 Removing: /var/run/dpdk/spdk_pid85269 00:24:35.194 Removing: /var/run/dpdk/spdk_pid85300 00:24:35.194 Removing: /var/run/dpdk/spdk_pid85334 00:24:35.194 Removing: /var/run/dpdk/spdk_pid85804 00:24:35.194 Removing: /var/run/dpdk/spdk_pid85814 00:24:35.194 Removing: /var/run/dpdk/spdk_pid86051 00:24:35.194 Removing: /var/run/dpdk/spdk_pid86177 00:24:35.194 Removing: /var/run/dpdk/spdk_pid86195 00:24:35.194 Clean 00:24:35.451 04:14:17 -- common/autotest_common.sh@1453 -- # return 0 00:24:35.451 04:14:17 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:24:35.451 04:14:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:35.451 04:14:17 -- common/autotest_common.sh@10 -- # set +x 00:24:35.451 04:14:17 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:24:35.451 04:14:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:35.451 04:14:17 -- common/autotest_common.sh@10 -- # set +x 00:24:35.451 04:14:17 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:35.451 04:14:17 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:24:35.451 04:14:17 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:24:35.451 04:14:17 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:24:35.451 04:14:17 -- spdk/autotest.sh@398 -- # hostname 00:24:35.452 04:14:17 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:24:35.711 geninfo: WARNING: invalid characters removed from testname! 
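Coverage post-processing follows the usual lcov flow: the per-test capture just above writes cov_test.info, and the commands that follow merge it with the baseline and strip out code that isn't SPDK's own. Stripped of the long --rc options, the sequence amounts to (SPDK_DIR standing in for the repo path):

    lcov -q -c --no-external -d "$SPDK_DIR" -o cov_test.info      # capture (above)
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info   # merge baseline + test run
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info        # then filter /usr/*, examples, apps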
00:24:57.639 04:14:39 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:00.936 04:14:42 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:03.470 04:14:45 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:05.997 04:14:47 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:08.589 04:14:50 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:11.120 04:14:52 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:13.022 04:14:54 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:13.022 04:14:54 -- spdk/autorun.sh@1 -- $ timing_finish 00:25:13.022 04:14:54 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:25:13.022 04:14:54 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:13.022 04:14:54 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:25:13.022 04:14:54 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:13.022 + [[ -n 5252 ]] 00:25:13.022 + sudo kill 5252 00:25:13.031 [Pipeline] } 00:25:13.046 [Pipeline] // timeout 00:25:13.051 [Pipeline] } 00:25:13.065 [Pipeline] // stage 00:25:13.072 [Pipeline] } 00:25:13.086 [Pipeline] // catchError 00:25:13.095 [Pipeline] stage 00:25:13.098 [Pipeline] { (Stop VM) 00:25:13.110 [Pipeline] sh 00:25:13.388 + vagrant halt 00:25:15.918 ==> default: Halting domain... 
00:25:22.493 [Pipeline] sh 00:25:22.774 + vagrant destroy -f 00:25:26.056 ==> default: Removing domain... 00:25:26.067 [Pipeline] sh 00:25:26.351 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:25:26.359 [Pipeline] } 00:25:26.373 [Pipeline] // stage 00:25:26.378 [Pipeline] } 00:25:26.392 [Pipeline] // dir 00:25:26.397 [Pipeline] } 00:25:26.410 [Pipeline] // wrap 00:25:26.416 [Pipeline] } 00:25:26.428 [Pipeline] // catchError 00:25:26.436 [Pipeline] stage 00:25:26.438 [Pipeline] { (Epilogue) 00:25:26.450 [Pipeline] sh 00:25:26.732 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:32.011 [Pipeline] catchError 00:25:32.012 [Pipeline] { 00:25:32.027 [Pipeline] sh 00:25:32.306 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:32.563 Artifacts sizes are good 00:25:32.572 [Pipeline] } 00:25:32.637 [Pipeline] // catchError 00:25:32.648 [Pipeline] archiveArtifacts 00:25:32.654 Archiving artifacts 00:25:32.796 [Pipeline] cleanWs 00:25:32.810 [WS-CLEANUP] Deleting project workspace... 00:25:32.810 [WS-CLEANUP] Deferred wipeout is used... 00:25:32.832 [WS-CLEANUP] done 00:25:32.833 [Pipeline] } 00:25:32.845 [Pipeline] // stage 00:25:32.850 [Pipeline] } 00:25:32.861 [Pipeline] // node 00:25:32.865 [Pipeline] End of Pipeline 00:25:32.889 Finished: SUCCESS